Test Report: Docker_Linux_crio_arm64 19678

8ef5536409705b0cbf1ed8a719bbf7f792426b16:2024-09-20:36299

Test failures (4/327)

| Order | Failed Test                                  | Duration (s) |
|-------|----------------------------------------------|--------------|
|    33 | TestAddons/parallel/Registry                 |        74.07 |
|    34 | TestAddons/parallel/Ingress                  |       153.31 |
|    36 | TestAddons/parallel/MetricsServer            |        331.1 |
|   173 | TestMultiControlPlane/serial/RestartCluster  |       137.04 |
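
Any individual failure can be re-run in isolation with the standard Go subtest selector; a sketch, assuming the minikube checkout's test/integration package and omitting whatever extra harness flags (driver, runtime) this job passes:

	go test -v -run 'TestAddons/parallel/Registry' ./test/integration
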
TestAddons/parallel/Registry (74.07s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.03633ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-2gc7z" [c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004717442s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tbwxh" [6bb565a3-2192-4ce8-8582-11f1d9d8ec42] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00541574s
addons_test.go:338: (dbg) Run:  kubectl --context addons-244316 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-244316 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-244316 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.119573071s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-244316 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
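
To reproduce the failing step by hand against the same profile, run the probe exactly as the test does (command copied verbatim from the log above; wget's -S output is expected to contain "HTTP/1.1 200"):

	kubectl --context addons-244316 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"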
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 ip
2024/09/20 19:38:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 addons disable registry --alsologtostderr -v=1
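
The DEBUG GET above is the harness probing the registry through the node address reported by the ip command. A sketch of the same check done by hand (a healthy registry should answer the root path with HTTP/1.1 200):

	curl -sS -I "http://$(out/minikube-linux-arm64 -p addons-244316 ip):5000/"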
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-244316
helpers_test.go:235: (dbg) docker inspect addons-244316:

-- stdout --
	[
	    {
	        "Id": "3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7",
	        "Created": "2024-09-20T19:25:55.126788858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 720989,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T19:25:55.300608812Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/hosts",
	        "LogPath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7-json.log",
	        "Name": "/addons-244316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-244316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-244316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b-init/diff:/var/lib/docker/overlay2/abb52e4f5a7bf897f28cf92e83fcbaaa3eeab65622f14fe44da11027a9deb44f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-244316",
	                "Source": "/var/lib/docker/volumes/addons-244316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-244316",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-244316",
	                "name.minikube.sigs.k8s.io": "addons-244316",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f3d1276f3986829b7ef05a9018d68f3626ebc86f1f53155e972dab26ef3188f",
	            "SandboxKey": "/var/run/docker/netns/3f3d1276f398",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-244316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8bb19f13f00a01d1da94938835d45e58571681a0667d77334eb4d48ebd8f6ef5",
	                    "EndpointID": "84f0a8ea26206e832205d2bb50a56b3db3dc2ad8c485969f2e47f1627577b1a0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-244316",
	                        "3d82610f1fe4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
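
The inspect output above shows the registry port published on loopback (5000/tcp -> 127.0.0.1:32770). That single mapping can be extracted with a Go template, mirroring the format string the harness itself uses for 22/tcp later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-244316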
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-244316 -n addons-244316
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-244316 logs -n 25: (1.719336736s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-533694   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | -p download-only-533694              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| delete  | -p download-only-533694              | download-only-533694   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | -o=json --download-only              | download-only-484642   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | -p download-only-484642              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| delete  | -p download-only-484642              | download-only-484642   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| delete  | -p download-only-533694              | download-only-533694   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| delete  | -p download-only-484642              | download-only-484642   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | --download-only -p                   | download-docker-394536 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | download-docker-394536               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-394536            | download-docker-394536 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | --download-only -p                   | binary-mirror-387387   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | binary-mirror-387387                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34931               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-387387              | binary-mirror-387387   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| addons  | enable dashboard -p                  | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | addons-244316                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | addons-244316                        |                        |         |         |                     |                     |
	| start   | -p addons-244316 --wait=true         | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:29 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:37 UTC | 20 Sep 24 19:37 UTC |
	|         | -p addons-244316                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-244316 addons disable         | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:37 UTC | 20 Sep 24 19:37 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-244316 addons                 | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-244316 addons                 | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-244316 ip                     | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	| addons  | addons-244316 addons disable         | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:25:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:25:29.773517  720494 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:29.773681  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:29.773717  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:29.773723  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:29.774046  720494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 19:25:29.774682  720494 out.go:352] Setting JSON to false
	I0920 19:25:29.775868  720494 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11279,"bootTime":1726849051,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 19:25:29.775943  720494 start.go:139] virtualization:  
	I0920 19:25:29.779178  720494 out.go:177] * [addons-244316] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:25:29.782484  720494 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:25:29.782599  720494 notify.go:220] Checking for updates...
	I0920 19:25:29.787949  720494 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:25:29.791244  720494 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:25:29.793888  720494 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	I0920 19:25:29.796579  720494 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:25:29.799156  720494 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:25:29.802100  720494 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:25:29.830398  720494 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:25:29.830533  720494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:29.884753  720494 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:25:29.875307304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:29.884872  720494 docker.go:318] overlay module found
	I0920 19:25:29.887812  720494 out.go:177] * Using the docker driver based on user configuration
	I0920 19:25:29.890512  720494 start.go:297] selected driver: docker
	I0920 19:25:29.890532  720494 start.go:901] validating driver "docker" against <nil>
	I0920 19:25:29.890547  720494 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:25:29.891202  720494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:29.946608  720494 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:25:29.93724064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:29.946823  720494 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:25:29.947062  720494 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:25:29.949812  720494 out.go:177] * Using Docker driver with root privileges
	I0920 19:25:29.952570  720494 cni.go:84] Creating CNI manager for ""
	I0920 19:25:29.952644  720494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:25:29.952660  720494 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 19:25:29.952801  720494 start.go:340] cluster config:
	{Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:29.957445  720494 out.go:177] * Starting "addons-244316" primary control-plane node in "addons-244316" cluster
	I0920 19:25:29.960190  720494 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 19:25:29.963127  720494 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:25:29.965720  720494 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:25:29.965816  720494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:29.965854  720494 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0920 19:25:29.965880  720494 cache.go:56] Caching tarball of preloaded images
	I0920 19:25:29.965965  720494 preload.go:172] Found /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0920 19:25:29.965980  720494 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:25:29.966344  720494 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/config.json ...
	I0920 19:25:29.966373  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/config.json: {Name:mk6955f082c6754495d7aaba1d3a3077fbb595bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:29.982114  720494 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:25:29.982227  720494 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:25:29.982252  720494 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:25:29.982261  720494 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:25:29.982269  720494 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:25:29.982275  720494 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:25:47.951558  720494 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:25:47.951596  720494 cache.go:194] Successfully downloaded all kic artifacts
	I0920 19:25:47.951647  720494 start.go:360] acquireMachinesLock for addons-244316: {Name:mk0522c0afca04ad0b8b7308c1947c33a5b75632 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:47.951772  720494 start.go:364] duration metric: took 100.896µs to acquireMachinesLock for "addons-244316"
	I0920 19:25:47.951805  720494 start.go:93] Provisioning new machine with config: &{Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:25:47.951885  720494 start.go:125] createHost starting for "" (driver="docker")
	I0920 19:25:47.953438  720494 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 19:25:47.953715  720494 start.go:159] libmachine.API.Create for "addons-244316" (driver="docker")
	I0920 19:25:47.953752  720494 client.go:168] LocalClient.Create starting
	I0920 19:25:47.953877  720494 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem
	I0920 19:25:48.990003  720494 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem
	I0920 19:25:49.508284  720494 cli_runner.go:164] Run: docker network inspect addons-244316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 19:25:49.524511  720494 cli_runner.go:211] docker network inspect addons-244316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 19:25:49.524596  720494 network_create.go:284] running [docker network inspect addons-244316] to gather additional debugging logs...
	I0920 19:25:49.524617  720494 cli_runner.go:164] Run: docker network inspect addons-244316
	W0920 19:25:49.541131  720494 cli_runner.go:211] docker network inspect addons-244316 returned with exit code 1
	I0920 19:25:49.541164  720494 network_create.go:287] error running [docker network inspect addons-244316]: docker network inspect addons-244316: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-244316 not found
	I0920 19:25:49.541203  720494 network_create.go:289] output of [docker network inspect addons-244316]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-244316 not found
	
	** /stderr **
	I0920 19:25:49.541314  720494 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:25:49.555672  720494 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004c8400}
	I0920 19:25:49.555719  720494 network_create.go:124] attempt to create docker network addons-244316 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 19:25:49.555776  720494 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-244316 addons-244316
	I0920 19:25:49.624569  720494 network_create.go:108] docker network addons-244316 192.168.49.0/24 created
	I0920 19:25:49.624607  720494 kic.go:121] calculated static IP "192.168.49.2" for the "addons-244316" container
	I0920 19:25:49.624710  720494 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 19:25:49.638281  720494 cli_runner.go:164] Run: docker volume create addons-244316 --label name.minikube.sigs.k8s.io=addons-244316 --label created_by.minikube.sigs.k8s.io=true
	I0920 19:25:49.656152  720494 oci.go:103] Successfully created a docker volume addons-244316
	I0920 19:25:49.656249  720494 cli_runner.go:164] Run: docker run --rm --name addons-244316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-244316 --entrypoint /usr/bin/test -v addons-244316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 19:25:50.909463  720494 cli_runner.go:217] Completed: docker run --rm --name addons-244316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-244316 --entrypoint /usr/bin/test -v addons-244316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.253168687s)
	I0920 19:25:50.909494  720494 oci.go:107] Successfully prepared a docker volume addons-244316
	I0920 19:25:50.909519  720494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:50.909540  720494 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 19:25:50.909613  720494 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-244316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 19:25:55.044839  720494 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-244316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.135176316s)
	I0920 19:25:55.044877  720494 kic.go:203] duration metric: took 4.135334236s to extract preloaded images to volume ...
	W0920 19:25:55.045079  720494 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 19:25:55.045238  720494 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 19:25:55.111173  720494 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-244316 --name addons-244316 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-244316 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-244316 --network addons-244316 --ip 192.168.49.2 --volume addons-244316:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 19:25:55.497137  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Running}}
	I0920 19:25:55.513667  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:25:55.540765  720494 cli_runner.go:164] Run: docker exec addons-244316 stat /var/lib/dpkg/alternatives/iptables
	I0920 19:25:55.618535  720494 oci.go:144] the created container "addons-244316" has a running status.
	I0920 19:25:55.618561  720494 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa...
	I0920 19:25:55.937892  720494 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 19:25:55.968552  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:25:56.000829  720494 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 19:25:56.000849  720494 kic_runner.go:114] Args: [docker exec --privileged addons-244316 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 19:25:56.069963  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:25:56.090448  720494 machine.go:93] provisionDockerMachine start ...
	I0920 19:25:56.090542  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:56.110367  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:56.110637  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:56.110647  720494 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:25:56.305168  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-244316
	
	I0920 19:25:56.305282  720494 ubuntu.go:169] provisioning hostname "addons-244316"
	I0920 19:25:56.305398  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:56.346342  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:56.346689  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:56.346713  720494 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-244316 && echo "addons-244316" | sudo tee /etc/hostname
	I0920 19:25:56.522053  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-244316
	
	I0920 19:25:56.522136  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:56.542986  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:56.543222  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:56.543240  720494 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-244316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-244316/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-244316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:25:56.688866  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:25:56.688893  720494 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-712952/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-712952/.minikube}
	I0920 19:25:56.688933  720494 ubuntu.go:177] setting up certificates
	I0920 19:25:56.688948  720494 provision.go:84] configureAuth start
	I0920 19:25:56.689025  720494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-244316
	I0920 19:25:56.706017  720494 provision.go:143] copyHostCerts
	I0920 19:25:56.706108  720494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem (1082 bytes)
	I0920 19:25:56.706235  720494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem (1123 bytes)
	I0920 19:25:56.706299  720494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem (1675 bytes)
	I0920 19:25:56.706352  720494 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem org=jenkins.addons-244316 san=[127.0.0.1 192.168.49.2 addons-244316 localhost minikube]
	I0920 19:25:57.019466  720494 provision.go:177] copyRemoteCerts
	I0920 19:25:57.019547  720494 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:25:57.019592  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.036410  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.138382  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 19:25:57.166107  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 19:25:57.190820  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:25:57.215294  720494 provision.go:87] duration metric: took 526.319417ms to configureAuth
	I0920 19:25:57.215365  720494 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:25:57.215581  720494 config.go:182] Loaded profile config "addons-244316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:57.215698  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.232471  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:57.232769  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:57.232792  720494 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:25:57.476783  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:25:57.476851  720494 machine.go:96] duration metric: took 1.386383012s to provisionDockerMachine
	I0920 19:25:57.476877  720494 client.go:171] duration metric: took 9.523113336s to LocalClient.Create
	I0920 19:25:57.476912  720494 start.go:167] duration metric: took 9.523196543s to libmachine.API.Create "addons-244316"
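Every ssh_runner/sshutil entry above is the same mechanic: dial the container's forwarded SSH port (127.0.0.1:32768) with the machine key and run one command. A minimal sketch using golang.org/x/crypto/ssh; this is an illustration of the pattern, not minikube's actual runner:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node, not production
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// Same shape as the CRIO_MINIKUBE_OPTIONS command in the log above.
	out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	log.Printf("output: %s, err: %v", out, err)
}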
	I0920 19:25:57.476938  720494 start.go:293] postStartSetup for "addons-244316" (driver="docker")
	I0920 19:25:57.476964  720494 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:25:57.477048  720494 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:25:57.477143  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.493786  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.597872  720494 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:25:57.601104  720494 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:25:57.601148  720494 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:25:57.601160  720494 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:25:57.601168  720494 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:25:57.601178  720494 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/addons for local assets ...
	I0920 19:25:57.601253  720494 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/files for local assets ...
	I0920 19:25:57.601279  720494 start.go:296] duration metric: took 124.321395ms for postStartSetup
	I0920 19:25:57.601598  720494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-244316
	I0920 19:25:57.617888  720494 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/config.json ...
	I0920 19:25:57.618195  720494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:25:57.618252  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.634402  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.737760  720494 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:25:57.742480  720494 start.go:128] duration metric: took 9.790578414s to createHost
	I0920 19:25:57.742508  720494 start.go:83] releasing machines lock for "addons-244316", held for 9.790720023s
	I0920 19:25:57.742594  720494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-244316
	I0920 19:25:57.760148  720494 ssh_runner.go:195] Run: cat /version.json
	I0920 19:25:57.760203  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.760211  720494 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:25:57.760279  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.783475  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.784627  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.880299  720494 ssh_runner.go:195] Run: systemctl --version
	I0920 19:25:58.009150  720494 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:25:58.154311  720494 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:25:58.159113  720494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:25:58.179821  720494 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:25:58.179900  720494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:25:58.211641  720494 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 19:25:58.211670  720494 start.go:495] detecting cgroup driver to use...
	I0920 19:25:58.211707  720494 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:25:58.211764  720494 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:25:58.227213  720494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:25:58.239238  720494 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:25:58.239307  720494 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:25:58.254293  720494 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:25:58.268754  720494 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:25:58.352765  720494 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:25:58.452761  720494 docker.go:233] disabling docker service ...
	I0920 19:25:58.452850  720494 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:25:58.472668  720494 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:25:58.485779  720494 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:25:58.573268  720494 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:25:58.666873  720494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:25:58.679533  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:25:58.698607  720494 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:25:58.698720  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.709753  720494 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:25:58.709850  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.721514  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.732020  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.743803  720494 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:25:58.754937  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.765725  720494 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.784156  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.795101  720494 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:25:58.804507  720494 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:25:58.814571  720494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:25:58.906740  720494 ssh_runner.go:195] Run: sudo systemctl restart crio
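The sed one-liners above all edit /etc/crio/crio.conf.d/02-crio.conf in place before the restart. For illustration, the two key rewrites (pause image and cgroup driver) expressed with Go's regexp package instead of sed; a sketch only, since minikube really does shell out to sed over SSH:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^ and $ match per line, like sed's default line addressing.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}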
	I0920 19:25:59.036841  720494 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:25:59.037033  720494 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:25:59.041940  720494 start.go:563] Will wait 60s for crictl version
	I0920 19:25:59.042029  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:25:59.046343  720494 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:25:59.091228  720494 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 19:25:59.091392  720494 ssh_runner.go:195] Run: crio --version
	I0920 19:25:59.133146  720494 ssh_runner.go:195] Run: crio --version
	I0920 19:25:59.173996  720494 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 19:25:59.175094  720494 cli_runner.go:164] Run: docker network inspect addons-244316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:25:59.194435  720494 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 19:25:59.198004  720494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
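The /etc/hosts command above is an idempotent upsert: strip any existing host.minikube.internal line, append the fresh mapping, and copy the result back into place. The same logic as a Go sketch (writing the file directly; the shell version goes through /tmp/h.$$ plus sudo cp because the remote runner is not root):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Equivalent of grep -v $'\thost.minikube.internal$'
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err) // needs root, like the sudo cp in the log
	}
}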
	I0920 19:25:59.208671  720494 kubeadm.go:883] updating cluster {Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:25:59.208837  720494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:59.208896  720494 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:25:59.284925  720494 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:25:59.284952  720494 crio.go:433] Images already preloaded, skipping extraction
	I0920 19:25:59.285011  720494 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:25:59.326795  720494 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:25:59.326826  720494 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:25:59.326836  720494 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 19:25:59.326938  720494 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-244316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:25:59.327033  720494 ssh_runner.go:195] Run: crio config
	I0920 19:25:59.400041  720494 cni.go:84] Creating CNI manager for ""
	I0920 19:25:59.400067  720494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:25:59.400078  720494 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:25:59.400123  720494 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-244316 NodeName:addons-244316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:25:59.400318  720494 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-244316"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:25:59.400413  720494 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:25:59.409466  720494 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:25:59.409543  720494 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:25:59.418255  720494 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 19:25:59.436798  720494 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:25:59.454812  720494 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
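The kubeadm/kubelet YAML above is rendered in memory and copied to the node as kubeadm.yaml.new (the "scp memory" entries). A minimal stand-in for that render step using text/template; the struct fields here are illustrative, not minikube's actual template data:

package main

import (
	"os"
	"text/template"
)

// A small fragment of the KubeletConfiguration shown above, as a template.
const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.DNSDomain}}"
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletCfg))
	err := tmpl.Execute(os.Stdout, struct {
		CgroupDriver, CRISocket, DNSDomain string
	}{"cgroupfs", "unix:///var/run/crio/crio.sock", "cluster.local"})
	if err != nil {
		panic(err)
	}
}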
	I0920 19:25:59.472784  720494 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 19:25:59.476021  720494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:25:59.487107  720494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:25:59.575326  720494 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:25:59.590207  720494 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316 for IP: 192.168.49.2
	I0920 19:25:59.590230  720494 certs.go:194] generating shared ca certs ...
	I0920 19:25:59.590247  720494 certs.go:226] acquiring lock for ca certs: {Name:mk7d5a5d7b3ae5cfc59d92978e91627e15e3360b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:59.590385  720494 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key
	I0920 19:26:01.128707  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt ...
	I0920 19:26:01.128744  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt: {Name:mk1e04770eebce03242f88886403fc8aaa4cfe20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.129575  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key ...
	I0920 19:26:01.129604  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key: {Name:mka1be98ed1f78200fab01b6e2e3e6b22c64df46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.130163  720494 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key
	I0920 19:26:01.605890  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt ...
	I0920 19:26:01.605926  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt: {Name:mk03b39bb6b8251d65137612cf5e860b85386060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.606164  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key ...
	I0920 19:26:01.606193  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key: {Name:mk84b5b286008c7b39f1846c3a68b7450ec1aa33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.606319  720494 certs.go:256] generating profile certs ...
	I0920 19:26:01.606400  720494 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.key
	I0920 19:26:01.606424  720494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt with IP's: []
	I0920 19:26:02.051551  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt ...
	I0920 19:26:02.051591  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: {Name:mk4ce0de29683e22275174265e154c929722a947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.051776  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.key ...
	I0920 19:26:02.051790  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.key: {Name:mk93067dbaede2ab18fb6ecd46883d29e619fb22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.051868  720494 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239
	I0920 19:26:02.051891  720494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 19:26:02.516359  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239 ...
	I0920 19:26:02.516396  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239: {Name:mk04066709546d402e3fb86d226ae85095f6ecbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.516605  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239 ...
	I0920 19:26:02.516620  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239: {Name:mkf95597b5bfdb7c10c9fa46a41da8ae82c6dd73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.516735  720494 certs.go:381] copying /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239 -> /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt
	I0920 19:26:02.516829  720494 certs.go:385] copying /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239 -> /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key
	I0920 19:26:02.516886  720494 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key
	I0920 19:26:02.516908  720494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt with IP's: []
	I0920 19:26:02.897643  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt ...
	I0920 19:26:02.897677  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt: {Name:mk09dc4a7bfb678ac6c7e5b6b5d0beeda1b27aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.897877  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key ...
	I0920 19:26:02.897893  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key: {Name:mkdfdda2c3f5759ba75abfb95a8a24312a55704c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.898086  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 19:26:02.898132  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem (1082 bytes)
	I0920 19:26:02.898162  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:26:02.898190  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem (1675 bytes)
	I0920 19:26:02.898795  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:26:02.926518  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:26:02.955404  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:26:02.983641  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:26:03.014867  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 19:26:03.046742  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:26:03.076519  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:26:03.109906  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:26:03.141479  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:26:03.168462  720494 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:26:03.189520  720494 ssh_runner.go:195] Run: openssl version
	I0920 19:26:03.195282  720494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:26:03.206954  720494 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:26:03.211167  720494 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:26:03.211239  720494 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:26:03.218399  720494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
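The openssl/ln pair above is how the minikube CA enters the system trust store: OpenSSL locates CAs by subject-hash filenames, so the cert is linked as <hash>.0 (b5213941.0 here). A Go sketch of the same two steps:

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	// openssl x509 -hash -noout prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}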
	I0920 19:26:03.227694  720494 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:26:03.230917  720494 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 19:26:03.230978  720494 kubeadm.go:392] StartCluster: {Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:26:03.231066  720494 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:26:03.231129  720494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:26:03.270074  720494 cri.go:89] found id: ""
	I0920 19:26:03.270153  720494 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:26:03.280624  720494 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:26:03.291274  720494 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 19:26:03.291459  720494 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:26:03.302610  720494 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:26:03.302644  720494 kubeadm.go:157] found existing configuration files:
	
	I0920 19:26:03.302713  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:26:03.313478  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:26:03.313591  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:26:03.323025  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:26:03.332499  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:26:03.332592  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:26:03.341716  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:26:03.351516  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:26:03.351613  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:26:03.362846  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:26:03.376977  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:26:03.377091  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
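The grep/rm sequence above, repeated for all four kubeconfigs, is a stale-config sweep: a file is kept only if it already points at control-plane.minikube.internal:8443, otherwise it is removed (harmless on this first start, where none exist yet). As a Go sketch:

package main

import (
	"bytes"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // config already targets this cluster; keep it
		}
		_ = os.Remove(f) // rm -f: a missing file is not an error
	}
}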
	I0920 19:26:03.387441  720494 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 19:26:03.434194  720494 kubeadm.go:310] W0920 19:26:03.433484    1189 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:26:03.436312  720494 kubeadm.go:310] W0920 19:26:03.435724    1189 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:26:03.478542  720494 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 19:26:03.547044  720494 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:26:21.145323  720494 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:26:21.145408  720494 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:26:21.145508  720494 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 19:26:21.145578  720494 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 19:26:21.145618  720494 kubeadm.go:310] OS: Linux
	I0920 19:26:21.145685  720494 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 19:26:21.145790  720494 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 19:26:21.145851  720494 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 19:26:21.145900  720494 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 19:26:21.145958  720494 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 19:26:21.146008  720494 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 19:26:21.146053  720494 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 19:26:21.146100  720494 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 19:26:21.146148  720494 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 19:26:21.146220  720494 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:26:21.146332  720494 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:26:21.146434  720494 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:26:21.146500  720494 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:26:21.148270  720494 out.go:235]   - Generating certificates and keys ...
	I0920 19:26:21.148377  720494 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:26:21.148446  720494 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:26:21.148515  720494 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 19:26:21.148586  720494 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 19:26:21.148656  720494 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 19:26:21.148741  720494 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 19:26:21.148806  720494 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 19:26:21.148925  720494 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-244316 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:26:21.148982  720494 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 19:26:21.149096  720494 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-244316 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:26:21.149163  720494 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 19:26:21.149230  720494 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 19:26:21.149278  720494 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 19:26:21.149337  720494 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:26:21.149392  720494 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:26:21.149453  720494 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:26:21.149507  720494 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:26:21.149572  720494 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:26:21.149629  720494 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:26:21.149710  720494 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:26:21.149782  720494 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:26:21.150997  720494 out.go:235]   - Booting up control plane ...
	I0920 19:26:21.151103  720494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:26:21.151182  720494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:26:21.151253  720494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:26:21.151362  720494 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:26:21.151450  720494 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:26:21.151493  720494 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:26:21.151625  720494 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:26:21.151731  720494 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:26:21.151792  720494 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.002195614s
	I0920 19:26:21.151866  720494 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:26:21.151928  720494 kubeadm.go:310] [api-check] The API server is healthy after 5.502091486s
	I0920 19:26:21.152036  720494 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:26:21.152163  720494 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:26:21.152225  720494 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:26:21.152406  720494 kubeadm.go:310] [mark-control-plane] Marking the node addons-244316 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:26:21.152465  720494 kubeadm.go:310] [bootstrap-token] Using token: z8az5e.wrm7la03ugzjp7n2
	I0920 19:26:21.154261  720494 out.go:235]   - Configuring RBAC rules ...
	I0920 19:26:21.154478  720494 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:26:21.154586  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:26:21.154732  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:26:21.154909  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:26:21.155048  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:26:21.155175  720494 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:26:21.155311  720494 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:26:21.155368  720494 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:26:21.155442  720494 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:26:21.155457  720494 kubeadm.go:310] 
	I0920 19:26:21.155528  720494 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:26:21.155538  720494 kubeadm.go:310] 
	I0920 19:26:21.155614  720494 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:26:21.155625  720494 kubeadm.go:310] 
	I0920 19:26:21.155651  720494 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:26:21.155712  720494 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:26:21.155767  720494 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:26:21.155774  720494 kubeadm.go:310] 
	I0920 19:26:21.155828  720494 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:26:21.155837  720494 kubeadm.go:310] 
	I0920 19:26:21.155891  720494 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:26:21.155899  720494 kubeadm.go:310] 
	I0920 19:26:21.155953  720494 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:26:21.156030  720494 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:26:21.156101  720494 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:26:21.156108  720494 kubeadm.go:310] 
	I0920 19:26:21.156190  720494 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:26:21.156274  720494 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:26:21.156280  720494 kubeadm.go:310] 
	I0920 19:26:21.156362  720494 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z8az5e.wrm7la03ugzjp7n2 \
	I0920 19:26:21.156468  720494 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9dcbae36a1cb65f9099573ad9fac7ebc036c2eab288a010b4e8645c68ec99bdd \
	I0920 19:26:21.156491  720494 kubeadm.go:310] 	--control-plane 
	I0920 19:26:21.156500  720494 kubeadm.go:310] 
	I0920 19:26:21.156585  720494 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:26:21.156594  720494 kubeadm.go:310] 
	I0920 19:26:21.156675  720494 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z8az5e.wrm7la03ugzjp7n2 \
	I0920 19:26:21.156882  720494 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9dcbae36a1cb65f9099573ad9fac7ebc036c2eab288a010b4e8645c68ec99bdd 
	I0920 19:26:21.156920  720494 cni.go:84] Creating CNI manager for ""
	I0920 19:26:21.156929  720494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:26:21.158769  720494 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 19:26:21.160058  720494 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 19:26:21.164256  720494 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 19:26:21.164293  720494 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 19:26:21.182687  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 19:26:21.476771  720494 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:26:21.476873  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:21.476920  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-244316 minikube.k8s.io/updated_at=2024_09_20T19_26_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-244316 minikube.k8s.io/primary=true
	I0920 19:26:21.502145  720494 ops.go:34] apiserver oom_adj: -16
	I0920 19:26:21.606164  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:22.106239  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:22.606963  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:23.106410  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:23.607082  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:24.106927  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:24.606481  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:24.731028  720494 kubeadm.go:1113] duration metric: took 3.254235742s to wait for elevateKubeSystemPrivileges
	I0920 19:26:24.731059  720494 kubeadm.go:394] duration metric: took 21.500084875s to StartCluster
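The burst of "kubectl get sa default" entries above is a poll: the command is retried about every 500ms until the default service account appears (3.25s here). A plain deadline-and-sleep sketch of that wait; not minikube's exact retry helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	panic("timed out waiting for default service account")
}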
	I0920 19:26:24.731077  720494 settings.go:142] acquiring lock: {Name:mk4ddd924228bcf0d3a34d801111d62307b61b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:24.731199  720494 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:26:24.731573  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/kubeconfig: {Name:mk7d8753aacb2df257bd5191c7b120c25eed71dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:24.732243  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 19:26:24.732578  720494 config.go:182] Loaded profile config "addons-244316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:26:24.732726  720494 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 19:26:24.732818  720494 addons.go:69] Setting yakd=true in profile "addons-244316"
	I0920 19:26:24.732834  720494 addons.go:234] Setting addon yakd=true in "addons-244316"
	I0920 19:26:24.732858  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.733357  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.733547  720494 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:26:24.733884  720494 addons.go:69] Setting cloud-spanner=true in profile "addons-244316"
	I0920 19:26:24.733908  720494 addons.go:234] Setting addon cloud-spanner=true in "addons-244316"
	I0920 19:26:24.733933  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.734414  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.734698  720494 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-244316"
	I0920 19:26:24.734731  720494 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-244316"
	I0920 19:26:24.734761  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.735207  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.737708  720494 addons.go:69] Setting registry=true in profile "addons-244316"
	I0920 19:26:24.738394  720494 addons.go:234] Setting addon registry=true in "addons-244316"
	I0920 19:26:24.738478  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.738988  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.743232  720494 addons.go:69] Setting storage-provisioner=true in profile "addons-244316"
	I0920 19:26:24.743320  720494 addons.go:234] Setting addon storage-provisioner=true in "addons-244316"
	I0920 19:26:24.743377  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.743891  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.748317  720494 addons.go:69] Setting default-storageclass=true in profile "addons-244316"
	I0920 19:26:24.748414  720494 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-244316"
	I0920 19:26:24.748919  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.756797  720494 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-244316"
	I0920 19:26:24.756882  720494 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-244316"
	I0920 19:26:24.757259  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.762988  720494 addons.go:69] Setting gcp-auth=true in profile "addons-244316"
	I0920 19:26:24.763039  720494 mustload.go:65] Loading cluster: addons-244316
	I0920 19:26:24.763255  720494 config.go:182] Loaded profile config "addons-244316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:26:24.763519  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.763925  720494 addons.go:69] Setting ingress=true in profile "addons-244316"
	I0920 19:26:24.763954  720494 addons.go:234] Setting addon ingress=true in "addons-244316"
	I0920 19:26:24.763999  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.764434  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.765037  720494 addons.go:69] Setting volcano=true in profile "addons-244316"
	I0920 19:26:24.765061  720494 addons.go:234] Setting addon volcano=true in "addons-244316"
	I0920 19:26:24.765092  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.765521  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.776878  720494 addons.go:69] Setting ingress-dns=true in profile "addons-244316"
	I0920 19:26:24.776920  720494 addons.go:234] Setting addon ingress-dns=true in "addons-244316"
	I0920 19:26:24.776986  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.777889  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.784771  720494 addons.go:69] Setting volumesnapshots=true in profile "addons-244316"
	I0920 19:26:24.784812  720494 addons.go:234] Setting addon volumesnapshots=true in "addons-244316"
	I0920 19:26:24.784851  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.785348  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.791077  720494 addons.go:69] Setting inspektor-gadget=true in profile "addons-244316"
	I0920 19:26:24.791114  720494 addons.go:234] Setting addon inspektor-gadget=true in "addons-244316"
	I0920 19:26:24.791157  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.791640  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.804082  720494 addons.go:69] Setting metrics-server=true in profile "addons-244316"
	I0920 19:26:24.804161  720494 out.go:177] * Verifying Kubernetes components...
	I0920 19:26:24.811690  720494 addons.go:234] Setting addon metrics-server=true in "addons-244316"
	I0920 19:26:24.811764  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.812275  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.738362  720494 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-244316"
	I0920 19:26:24.829194  720494 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-244316"
	I0920 19:26:24.829235  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.829723  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
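
Each "Setting addon ...=true in profile" / "Setting addon ...=true in" pair above is one requested addon being registered before anything is installed on the node. The same set can be toggled after start-up from the CLI; a minimal sketch against this profile (addon names illustrative, taken from the log):

	out/minikube-linux-arm64 -p addons-244316 addons enable registry
	out/minikube-linux-arm64 -p addons-244316 addons enable metrics-server
	out/minikube-linux-arm64 -p addons-244316 addons list    # per-addon enabled/disabled state
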
	I0920 19:26:24.850636  720494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:26:24.850683  720494 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 19:26:24.876752  720494 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 19:26:24.886705  720494 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 19:26:24.893510  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.895810  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 19:26:24.895828  720494 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 19:26:24.895890  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.904782  720494 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 19:26:24.906634  720494 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 19:26:24.911282  720494 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:26:24.911362  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 19:26:24.911513  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
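
The "scp memory --> ..." lines mean the manifest is not a file on the build host: it is an asset embedded in the minikube binary, streamed over the SSH tunnel into /etc/kubernetes/addons/ inside the node container. A rough manual equivalent, assuming you had the manifest as a local file (nvidia-device-plugin.yaml here is hypothetical; minikube never writes it locally) and using the tunnel endpoint this log reports further down (127.0.0.1:32768, user docker):

	KEY=/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa
	scp -P 32768 -i "$KEY" nvidia-device-plugin.yaml docker@127.0.0.1:/tmp/
	ssh -p 32768 -i "$KEY" docker@127.0.0.1 \
	  'sudo mv /tmp/nvidia-device-plugin.yaml /etc/kubernetes/addons/'
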
	I0920 19:26:24.914963  720494 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 19:26:24.915046  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 19:26:24.915159  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.920961  720494 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 19:26:24.921038  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 19:26:24.921135  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.958752  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 19:26:24.960237  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 19:26:24.960307  720494 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 19:26:24.964077  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.967158  720494 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 19:26:24.968275  720494 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-244316"
	I0920 19:26:24.968317  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.971597  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.988157  720494 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 19:26:24.988245  720494 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 19:26:24.988355  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.008361  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:26:25.008545  720494 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 19:26:25.021818  720494 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:26:25.027173  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 19:26:25.028992  720494 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:26:25.029020  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 19:26:25.029087  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.030454  720494 addons.go:234] Setting addon default-storageclass=true in "addons-244316"
	I0920 19:26:25.030503  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:25.030960  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:25.049224  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:26:25.057818  720494 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:26:25.057891  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:26:25.057977  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.069805  720494 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:26:25.069834  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 19:26:25.069903  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.073303  720494 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 19:26:25.074477  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 19:26:25.105627  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:26:25.105745  720494 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:26:25.105935  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.122314  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
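
The repeated docker container inspect -f calls with the {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} template resolve which host port Docker mapped to the node container's SSH port 22; the ssh client struct above shows the answer for this run, 32768. Run by hand it prints only the mapped port:

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-244316
	32768
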
	I0920 19:26:25.123447  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 19:26:25.124754  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0920 19:26:25.125412  720494 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 19:26:25.130076  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 19:26:25.133405  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 19:26:25.135850  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 19:26:25.142193  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 19:26:25.143570  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 19:26:25.145106  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 19:26:25.146269  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 19:26:25.146298  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 19:26:25.146395  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.191104  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.212528  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.248811  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.265661  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.284008  720494 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:26:25.284030  720494 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:26:25.284093  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.287036  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.299833  720494 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 19:26:25.305881  720494 out.go:177]   - Using image docker.io/busybox:stable
	I0920 19:26:25.312108  720494 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:26:25.312162  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 19:26:25.312243  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.316161  720494 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:26:25.348052  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.368019  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.368977  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.369628  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.379009  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.401631  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.411379  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.628116  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:26:25.691608  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 19:26:25.691689  720494 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 19:26:25.754633  720494 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 19:26:25.754725  720494 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 19:26:25.783343  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:26:25.807311  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:26:25.811289  720494 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 19:26:25.811364  720494 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 19:26:25.814757  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 19:26:25.814831  720494 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 19:26:25.822773  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:26:25.822846  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 19:26:25.847634  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:26:25.850739  720494 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 19:26:25.850817  720494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 19:26:25.871877  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:26:25.874400  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 19:26:25.904843  720494 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 19:26:25.904924  720494 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 19:26:25.931038  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:26:25.931145  720494 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:26:25.937014  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 19:26:25.937091  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 19:26:25.946447  720494 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:26:25.946510  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 19:26:25.952435  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:26:26.003047  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 19:26:26.003132  720494 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 19:26:26.055471  720494 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 19:26:26.055563  720494 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 19:26:26.058191  720494 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 19:26:26.058277  720494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 19:26:26.108523  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:26:26.108607  720494 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:26:26.120555  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 19:26:26.120639  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 19:26:26.143676  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:26:26.143751  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 19:26:26.159321  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:26:26.223265  720494 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 19:26:26.223347  720494 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 19:26:26.239958  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:26:26.290920  720494 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 19:26:26.291005  720494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 19:26:26.309158  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:26:26.335256  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 19:26:26.335338  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 19:26:26.351619  720494 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 19:26:26.351703  720494 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 19:26:26.442556  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 19:26:26.442634  720494 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 19:26:26.510624  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 19:26:26.510716  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 19:26:26.521274  720494 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 19:26:26.521400  720494 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 19:26:26.561917  720494 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:26:26.562042  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 19:26:26.611747  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 19:26:26.611825  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 19:26:26.612177  720494 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:26:26.612229  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 19:26:26.627234  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:26:26.678176  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 19:26:26.678252  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 19:26:26.694345  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:26:26.778377  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 19:26:26.778461  720494 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 19:26:26.948977  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 19:26:26.949051  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 19:26:27.077619  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 19:26:27.077706  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 19:26:27.165214  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:26:27.165330  720494 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 19:26:27.323372  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:26:28.781000  720494 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.657515938s)
	I0920 19:26:28.781029  720494 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
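
The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" directive and a log directive ahead of errors, then pushes the result back with kubectl replace. After the edit the Corefile should carry a stanza like:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

which can be spot-checked with:

	kubectl --context addons-244316 -n kube-system get configmap coredns -o yaml
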
	I0920 19:26:28.782336  720494 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.466143903s)
	I0920 19:26:28.783478  720494 node_ready.go:35] waiting up to 6m0s for node "addons-244316" to be "Ready" ...
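
node_ready.go polls the node object until its Ready condition flips to True; the has status "Ready":"False" lines below are those polls. A rough kubectl equivalent of the same 6m wait:

	kubectl --context addons-244316 wait --for=condition=Ready node/addons-244316 --timeout=6m
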
	I0920 19:26:28.800679  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.172462658s)
	I0920 19:26:29.525301  720494 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-244316" context rescaled to 1 replicas
	I0920 19:26:30.797646  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:31.267609  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.460211174s)
	I0920 19:26:31.267696  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.419987423s)
	I0920 19:26:31.267729  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.395787758s)
	I0920 19:26:31.267762  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.393293006s)
	I0920 19:26:31.267799  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.315298297s)
	I0920 19:26:31.267824  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.108422582s)
	I0920 19:26:31.268250  720494 addons.go:475] Verifying addon registry=true in "addons-244316"
	I0920 19:26:31.268429  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.484989303s)
	I0920 19:26:31.268458  720494 addons.go:475] Verifying addon ingress=true in "addons-244316"
	I0920 19:26:31.267881  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.02785039s)
	I0920 19:26:31.268830  720494 addons.go:475] Verifying addon metrics-server=true in "addons-244316"
	I0920 19:26:31.267910  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.958677932s)
	I0920 19:26:31.271284  720494 out.go:177] * Verifying registry addon...
	I0920 19:26:31.271359  720494 out.go:177] * Verifying ingress addon...
	I0920 19:26:31.272983  720494 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-244316 service yakd-dashboard -n yakd-dashboard
	
	I0920 19:26:31.275860  720494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 19:26:31.277001  720494 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 19:26:31.316121  720494 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 19:26:31.316161  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:31.317362  720494 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 19:26:31.317387  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
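
kapi.go:75/86/96 is minikube's label-selector poll loop: list the pods matching a selector in a namespace, then re-check until every match is Running and Ready. Roughly the same check by hand, with the two selectors from this log (note kubectl wait needs at least one matching pod to exist):

	kubectl --context addons-244316 -n kube-system wait --timeout=6m \
	  --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-244316 -n ingress-nginx wait --timeout=6m \
	  --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx
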
	W0920 19:26:31.351356  720494 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
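
The 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: minikube read the local-path StorageClass, another writer updated it first, and the stale write was rejected ("the object has been modified"). A patch is applied server-side without a resourceVersion precondition, so re-applying the default-class annotation by hand sidesteps the race; a sketch:

	kubectl --context addons-244316 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
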
	I0920 19:26:31.428421  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.80109277s)
	W0920 19:26:31.428552  720494 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 19:26:31.428610  720494 retry.go:31] will retry after 262.995193ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
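
This failure-and-retry is the usual CRD ordering race: the batch applies the csi-hostpath-snapclass VolumeSnapshotClass in the same kubectl apply as the CRD that defines its kind, and the API server has not finished registering snapshot.storage.k8s.io/v1 when the custom resource arrives, hence "ensure CRDs are installed first". minikube simply retries after ~263ms (the retry below re-runs the batch with apply --force); a race-free sequence splits the apply and waits for the CRD to become Established first:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
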
	I0920 19:26:31.428729  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.734302903s)
	I0920 19:26:31.692832  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:26:31.785593  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:31.787168  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:31.807816  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.48433629s)
	I0920 19:26:31.807856  720494 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-244316"
	I0920 19:26:31.812633  720494 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 19:26:31.816455  720494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 19:26:31.827502  720494 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 19:26:31.827532  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:32.319083  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:32.338243  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:32.343594  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:32.780997  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:32.782488  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:32.821766  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:33.292797  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:33.293772  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:33.294669  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:33.320656  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:33.550663  720494 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 19:26:33.550800  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:33.572839  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:33.737603  720494 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 19:26:33.782544  720494 addons.go:234] Setting addon gcp-auth=true in "addons-244316"
	I0920 19:26:33.782601  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:33.783145  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:33.786655  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:33.788546  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:33.798699  720494 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 19:26:33.798751  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:33.821497  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:33.823451  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:34.284102  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:34.289618  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:34.323390  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:34.779640  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:34.781007  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:34.820473  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:34.992350  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.299433758s)
	I0920 19:26:34.992431  720494 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.193714155s)
	I0920 19:26:34.995444  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:26:34.997869  720494 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 19:26:35.001709  720494 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 19:26:35.001756  720494 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 19:26:35.035728  720494 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 19:26:35.035759  720494 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 19:26:35.079944  720494 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:26:35.079979  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 19:26:35.102984  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:26:35.294596  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:35.295376  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:35.296628  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:35.323531  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:35.758392  720494 addons.go:475] Verifying addon gcp-auth=true in "addons-244316"
	I0920 19:26:35.761764  720494 out.go:177] * Verifying gcp-auth addon...
	I0920 19:26:35.765377  720494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 19:26:35.775895  720494 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 19:26:35.775929  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:35.783065  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:35.788830  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:35.820810  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:36.269858  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:36.279954  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:36.283043  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:36.320621  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:36.768866  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:36.779447  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:36.781676  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:36.820993  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:37.269034  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:37.282448  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:37.285567  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:37.321631  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:37.773878  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:37.779965  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:37.784905  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:37.788061  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:37.822000  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:38.269379  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:38.281874  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:38.282763  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:38.320741  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:38.769226  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:38.780873  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:38.782249  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:38.821403  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:39.269763  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:39.282689  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:39.283733  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:39.319865  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:39.770281  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:39.780666  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:39.781505  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:39.819986  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:40.269726  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:40.284516  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:40.288013  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:40.289324  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:40.321134  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:40.768854  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:40.781445  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:40.782576  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:40.820776  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:41.270401  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:41.282485  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:41.286141  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:41.320497  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:41.769918  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:41.781588  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:41.781944  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:41.820204  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:42.269713  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:42.283956  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:42.285628  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:42.289754  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:42.324052  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:42.768905  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:42.779837  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:42.781697  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:42.820416  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:43.269483  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:43.282444  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:43.289492  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:43.320876  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:43.769566  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:43.780213  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:43.781387  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:43.820582  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:44.268685  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:44.283172  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:44.284818  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:44.319863  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:44.768823  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:44.779384  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:44.780902  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:44.787478  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:44.820117  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:45.271913  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:45.292436  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:45.292763  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:45.321040  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:45.768511  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:45.780265  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:45.781678  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:45.819905  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:46.268442  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:46.281477  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:46.283481  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:46.321173  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:46.769096  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:46.780569  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:46.781603  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:46.820668  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:47.269903  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:47.283811  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:47.285562  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:47.288580  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:47.321196  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:47.769692  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:47.780998  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:47.781170  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:47.820425  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:48.269042  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:48.280340  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:48.282709  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:48.320815  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:48.775148  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:48.780544  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:48.780639  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:48.819936  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:49.268525  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:49.287431  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:49.289191  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:49.290872  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:49.319847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:49.769384  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:49.779298  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:49.781188  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:49.820383  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:50.269301  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:50.282411  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:50.285240  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:50.321060  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:50.769473  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:50.779443  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:50.781486  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:50.820597  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:51.270112  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:51.282639  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:51.283088  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:51.320080  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:51.770106  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:51.780583  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:51.782027  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:51.787638  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:51.821886  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:52.268519  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:52.283176  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:52.284064  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:52.320872  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:52.769214  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:52.780683  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:52.781634  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:52.820510  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:53.268723  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:53.282200  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:53.283249  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:53.319884  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:53.769787  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:53.779902  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:53.781216  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:53.787956  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:53.820259  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:54.268727  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:54.284290  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:54.286675  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:54.320499  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:54.770197  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:54.780336  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:54.780886  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:54.872159  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:55.269901  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:55.283331  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:55.284928  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:55.322123  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:55.769377  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:55.786795  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:55.788318  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:55.792405  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:55.820273  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:56.269247  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:56.282020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:56.282671  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:56.320941  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:56.768548  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:56.779663  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:56.781168  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:56.823458  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:57.270683  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:57.280849  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:57.289341  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:57.320313  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:57.770057  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:57.781886  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:57.782805  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:57.820734  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:58.269602  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:58.287535  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:58.289519  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:58.290728  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:58.320213  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:58.775347  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:58.779009  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:58.780054  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:58.820774  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:59.270294  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:59.286626  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:59.286677  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:59.320640  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:59.769280  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:59.778971  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:59.782087  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:59.820716  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:00.309244  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:00.318305  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:00.319591  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:00.339041  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:00.343407  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:00.769300  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:00.780065  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:00.781157  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:00.820504  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:01.269978  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:01.280960  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:01.281807  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:01.320569  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:01.770020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:01.779716  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:01.780878  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:01.820495  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:02.268783  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:02.288170  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:02.289424  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:02.320500  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:02.769169  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:02.779328  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:02.780818  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:02.787295  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:02.820714  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:03.269333  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:03.282502  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:03.283193  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:03.320347  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:03.768910  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:03.779162  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:03.786884  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:03.820888  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:04.268561  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:04.282839  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:04.286144  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:04.319847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:04.769755  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:04.779324  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:04.781769  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:04.787978  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:04.819936  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:05.269186  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:05.279197  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:05.282877  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:05.320475  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:05.768569  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:05.780966  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:05.781692  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:05.820438  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:06.268480  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:06.281239  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:06.282090  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:06.322308  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:06.769661  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:06.779803  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:06.781464  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:06.819852  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:07.268837  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:07.281876  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:07.284988  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:07.286363  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:07.320953  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:07.769044  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:07.779879  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:07.781635  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:07.820446  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:08.269819  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:08.279826  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:08.282143  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:08.320863  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:08.769566  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:08.780628  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:08.781408  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:08.820608  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:09.269486  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:09.282792  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:09.284787  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:09.288825  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:09.320872  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:09.771436  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:09.871052  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:09.871872  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:09.872778  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:10.268788  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:10.281335  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:10.282038  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:10.320121  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:10.790338  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:10.798021  720494 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 19:27:10.798094  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:10.802689  720494 node_ready.go:49] node "addons-244316" has status "Ready":"True"
	I0920 19:27:10.802757  720494 node_ready.go:38] duration metric: took 42.019246373s for node "addons-244316" to be "Ready" ...
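The node_ready.go lines flip from "Ready":"False" to "Ready":"True" at this point, after roughly 42s. That check boils down to reading the NodeReady condition off the Node object via the Kubernetes API. A minimal client-go sketch under that assumption; isNodeReady is a hypothetical name, not minikube's actual function:

```go
// Minimal sketch of the node_ready.go-style check, assuming it reduces
// to reading the NodeReady condition; isNodeReady is a hypothetical name.
package waiters

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isNodeReady fetches the node and reports whether its NodeReady
// condition is True -- the "Ready":"True"/"False" printed above.
func isNodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```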
	I0920 19:27:10.802790  720494 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:27:10.812816  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:10.826520  720494 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 19:27:10.826550  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
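The long run of kapi.go:96 lines above comes from per-addon poll loops that list pods by label selector roughly twice a second (note the ~500ms spacing of the timestamps) until every match is Running. A minimal client-go sketch of that pattern; waitForPods, the 500ms interval, and the printed message are illustrative assumptions, not minikube's real code:

```go
// Sketch of the polling pattern behind the kapi.go:96 lines above,
// assuming standard client-go; names, interval and message format
// are assumptions for illustration only.
package waiters

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls the API server until every pod matching selector
// in namespace ns reaches phase Running, or timeout expires.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			if len(pods.Items) == 0 {
				// Nothing scheduled yet; this is the long "Pending" stretch above.
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}
```

The "Found 2 Pods for label selector kubernetes.io/minikube-addons=registry" and "Found 3 Pods ... csi-hostpath-driver" lines at 19:27:10 mark the first iterations where the List call returns a non-empty result, which only became possible once the node went Ready and pods could be scheduled.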
	I0920 19:27:10.835433  720494 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-22l55" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.293900  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:11.305989  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:11.307104  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:11.332577  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:11.783357  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:11.784517  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:11.784931  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:11.849334  720494 pod_ready.go:93] pod "coredns-7c65d6cfc9-22l55" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.849418  720494 pod_ready.go:82] duration metric: took 1.013937392s for pod "coredns-7c65d6cfc9-22l55" in "kube-system" namespace to be "Ready" ...
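From here pod_ready.go walks the system-critical pods one by one; each wait resolves once the pod's Ready condition reports True, which is why coredns clears in about 1s while metrics-server (below) keeps logging "Ready":"False" for minutes. A short sketch of that condition test, assuming the standard PodReady condition; podIsReady is an assumed helper name:

```go
// Sketch of the per-pod readiness test pod_ready.go appears to apply,
// assuming the standard PodReady condition; podIsReady is an assumed name.
package waiters

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether the pod's Ready condition is True, i.e.
// all containers are up and passing their readiness probes.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```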
	I0920 19:27:11.849456  720494 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.868495  720494 pod_ready.go:93] pod "etcd-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.868569  720494 pod_ready.go:82] duration metric: took 19.076003ms for pod "etcd-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.868600  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.875423  720494 pod_ready.go:93] pod "kube-apiserver-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.875560  720494 pod_ready.go:82] duration metric: took 6.929545ms for pod "kube-apiserver-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.875595  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.879213  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:11.884216  720494 pod_ready.go:93] pod "kube-controller-manager-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.884288  720494 pod_ready.go:82] duration metric: took 8.628615ms for pod "kube-controller-manager-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.884318  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2cdvm" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.988094  720494 pod_ready.go:93] pod "kube-proxy-2cdvm" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.988130  720494 pod_ready.go:82] duration metric: took 103.789214ms for pod "kube-proxy-2cdvm" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.988147  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:12.269264  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:12.287208  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:12.289033  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:12.322571  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:12.388606  720494 pod_ready.go:93] pod "kube-scheduler-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:12.388638  720494 pod_ready.go:82] duration metric: took 400.478914ms for pod "kube-scheduler-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:12.388653  720494 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:12.770087  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:12.781393  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:12.785622  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:12.822319  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:13.269603  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:13.296337  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:13.296766  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:13.322590  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:13.769847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:13.779693  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:13.782433  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:13.822091  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:14.269252  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:14.280182  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:14.284723  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:14.322263  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:14.398349  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:14.770832  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:14.783054  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:14.784559  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:14.822909  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:15.270387  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:15.285696  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:15.290910  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:15.326172  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:15.770026  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:15.783567  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:15.785272  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:15.824485  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:16.270241  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:16.284794  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:16.285654  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:16.323741  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:16.770988  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:16.786341  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:16.788159  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:16.824214  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:16.898178  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:17.268906  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:17.285452  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:17.297778  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:17.323090  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:17.770096  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:17.783132  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:17.791255  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:17.822351  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:18.269424  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:18.280994  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:18.282707  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:18.321372  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:18.769666  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:18.781587  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:18.784235  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:18.822682  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:19.269470  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:19.283677  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:19.288629  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:19.321699  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:19.396588  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:19.772980  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:19.780803  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:19.782719  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:19.875556  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:20.269492  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:20.291670  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:20.292866  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:20.337167  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:20.773159  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:20.784988  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:20.787988  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:20.872046  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:21.269963  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:21.282783  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:21.286803  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:21.322202  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:21.405583  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:21.783199  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:21.783670  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:21.784876  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:21.821833  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:22.269088  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:22.284065  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:22.285339  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:22.321055  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:22.770100  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:22.781884  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:22.782968  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:22.823045  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:23.270418  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:23.303569  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:23.309281  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:23.339078  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:23.769506  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:23.783194  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:23.785917  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:23.822439  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:23.897984  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:24.269340  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:24.291455  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:24.292763  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:24.323361  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:24.769968  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:24.782088  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:24.783038  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:24.822751  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:25.272308  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:25.283430  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:25.283627  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:25.374953  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:25.769005  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:25.780915  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:25.781787  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:25.823930  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:25.903531  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:26.269834  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:26.282530  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:26.283167  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:26.322054  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:26.772379  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:26.782423  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:26.783631  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:26.825085  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:27.269779  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:27.284418  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:27.284806  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:27.338146  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:27.769342  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:27.780314  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:27.781854  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:27.821476  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:28.269286  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:28.286594  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:28.287661  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:28.321326  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:28.396550  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:28.769726  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:28.786411  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:28.789226  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:28.823669  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:29.273075  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:29.294479  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:29.294798  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:29.321458  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:29.783611  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:29.801323  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:29.802103  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:29.822898  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:30.270237  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:30.281437  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:30.288878  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:30.323004  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:30.397272  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:30.770019  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:30.781553  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:30.784063  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:30.821814  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:31.274829  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:31.376343  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:31.376669  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:31.377851  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:31.770717  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:31.872819  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:31.874407  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:31.874951  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:32.269835  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:32.289378  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:32.296761  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:32.336615  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:32.770473  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:32.780243  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:32.783111  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:32.822024  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:32.895154  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:33.269100  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:33.285151  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:33.286306  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:33.321947  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:33.769592  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:33.785588  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:33.787506  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:33.823169  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:34.270017  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:34.296394  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:34.298155  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:34.323308  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:34.771050  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:34.779942  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:34.783226  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:34.823169  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:34.896141  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:35.271216  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:35.287615  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:35.287751  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:35.321772  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:35.769122  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:35.779720  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:35.783006  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:35.825488  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:36.271960  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:36.283707  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:36.285920  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:36.323409  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:36.769923  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:36.783227  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:36.784870  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:36.823594  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:36.898558  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:37.269755  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:37.293610  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:37.295883  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:37.324183  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:37.770650  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:37.787794  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:37.790021  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:37.825192  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:38.271469  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:38.286644  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:38.295621  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:38.372751  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:38.770626  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:38.783653  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:38.785086  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:38.828413  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:38.899046  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:39.269283  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:39.289657  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:39.290851  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:39.322100  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:39.769808  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:39.780111  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:39.782297  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:39.822102  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:40.269888  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:40.283081  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:40.289576  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:40.321540  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:40.771932  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:40.786292  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:40.787574  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:40.822514  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:40.902622  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:41.293096  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:41.293655  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:41.295135  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:41.383616  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:41.769623  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:41.780568  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:41.782685  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:41.821358  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:42.270092  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:42.283534  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:42.285074  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:42.323092  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:42.769472  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:42.783473  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:42.784385  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:42.821487  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:42.910866  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:43.269586  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:43.283137  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:43.284561  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:43.322062  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:43.770396  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:43.783706  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:43.785318  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:43.874138  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:44.270492  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:44.288382  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:44.289311  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:44.323291  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:44.772398  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:44.784708  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:44.789269  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:44.828934  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:45.270427  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:45.293926  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:45.297006  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:45.330624  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:45.395524  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:45.770375  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:45.780214  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:45.782897  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:45.821691  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:46.269691  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:46.287920  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:46.290547  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:46.321363  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:46.769449  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:46.780741  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:46.781790  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:46.821438  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:47.268967  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:47.283856  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:47.288484  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:47.321138  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:47.771747  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:47.782286  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:47.782867  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:47.821598  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:47.901855  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:48.270415  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:48.283534  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:48.293559  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:48.321254  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:48.769759  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:48.783475  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:48.784034  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:48.821245  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:49.269519  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:49.296283  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:49.297159  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:49.321855  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:49.769549  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:49.786083  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:49.787661  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:49.836279  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:49.905574  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:50.269880  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:50.284332  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:50.285274  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:50.326269  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:50.770583  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:50.784496  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:50.786832  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:50.821866  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:51.272774  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:51.288246  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:51.300589  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:51.328745  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:51.769682  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:51.784224  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:51.786399  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:51.822610  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:52.270010  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:52.284491  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:52.296634  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:52.321591  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:52.395168  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:52.769363  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:52.803871  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:52.804636  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:52.851054  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:53.269200  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:53.291143  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:53.292306  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:53.320768  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:53.769623  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:53.780255  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:53.781495  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:53.821099  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:54.270051  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:54.280279  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:54.286682  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:54.321233  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:54.397046  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:54.769210  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:54.781271  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:54.781800  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:54.821639  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:55.269499  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:55.283112  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:55.288430  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:55.321614  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:55.770291  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:55.780427  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:55.783414  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:55.821748  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:56.269112  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:56.297598  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:56.299043  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:56.322662  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:56.769391  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:56.782271  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:56.785981  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:56.822587  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:56.895669  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:57.269104  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:57.283331  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:57.285039  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:57.324318  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:57.770711  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:57.785695  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:57.786564  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:57.821295  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:58.270847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:58.297070  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:58.299954  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:58.324589  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:58.770818  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:58.783118  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:58.784755  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:58.824340  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:58.898454  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:59.269608  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:59.288962  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:59.289893  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:59.327209  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:59.771301  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:59.779197  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:59.782085  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:59.821907  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:00.314086  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:28:00.315524  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:00.315879  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:00.377221  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:00.769699  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:00.782012  720494 kapi.go:107] duration metric: took 1m29.50615052s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 19:28:00.785641  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:00.821608  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:00.904903  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:01.273520  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:01.285242  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:01.322849  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:01.769313  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:01.783195  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:01.822914  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:02.274020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:02.298141  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:02.326441  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:02.780076  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:02.785058  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:02.822665  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:03.268604  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:03.283597  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:03.321554  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:03.395718  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:03.768851  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:03.781855  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:03.823928  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:04.272216  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:04.283815  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:04.321512  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:04.769441  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:04.781911  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:04.821714  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:05.273285  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:05.286463  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:05.321734  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:05.400983  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:05.768739  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:05.782803  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:05.822520  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:06.271932  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:06.284198  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:06.322312  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:06.769439  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:06.781469  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:06.821798  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:07.269345  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:07.282286  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:07.321981  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:07.768935  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:07.782910  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:07.822376  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:07.899845  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:08.270993  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:08.281747  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:08.373020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:08.769168  720494 kapi.go:107] duration metric: took 1m33.003789569s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 19:28:08.771038  720494 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-244316 cluster.
	I0920 19:28:08.772384  720494 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 19:28:08.773719  720494 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 19:28:08.781583  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:08.821332  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:09.282252  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:09.322029  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:09.783523  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:09.822921  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:09.902756  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:10.296308  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:10.322822  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:10.781762  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:10.822736  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:11.297609  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:11.321398  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:11.788233  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:11.824483  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:12.282873  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:12.322536  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:12.397997  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:12.782445  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:12.821032  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:13.288861  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:13.329878  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:13.781557  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:13.821498  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:14.290567  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:14.397119  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:14.401354  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:14.782289  720494 kapi.go:107] duration metric: took 1m43.505284147s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 19:28:14.821777  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:15.321877  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:15.834652  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:16.322429  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:16.822634  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:16.895178  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:17.323711  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:17.821392  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:18.326695  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:18.826947  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:18.895775  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:19.322832  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:19.825859  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:20.326263  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:20.825646  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:20.902662  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:21.322196  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:21.822435  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:22.322167  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:22.824989  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:23.322738  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:23.399445  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:23.822550  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:24.322519  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:24.824176  720494 kapi.go:107] duration metric: took 1m53.007723649s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 19:28:24.825669  720494 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0920 19:28:24.826761  720494 addons.go:510] duration metric: took 2m0.094026687s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
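(The kapi.go:96 lines above are minikube's readiness loop: it polls each addon's pods by label selector until they report Ready, then prints a duration metric. A rough manual equivalent of one such wait, assuming the addons-244316 context used throughout this report — minikube itself does this through client-go rather than kubectl:

	kubectl --context addons-244316 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --for=condition=Ready --timeout=10m

The same form applies to the registry and gcp-auth labels seen above; the ingress-nginx pods live in their own namespace.)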
	I0920 19:28:25.896052  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:27.896324  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:30.395200  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:32.895750  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:34.896053  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:37.396563  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:39.396837  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:41.895058  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:43.896042  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:45.907685  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:48.395599  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:50.895101  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:51.396083  720494 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"True"
	I0920 19:28:51.396121  720494 pod_ready.go:82] duration metric: took 1m39.007452648s for pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace to be "Ready" ...
	I0920 19:28:51.396140  720494 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-n79hn" in "kube-system" namespace to be "Ready" ...
	I0920 19:28:51.402155  720494 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-n79hn" in "kube-system" namespace has status "Ready":"True"
	I0920 19:28:51.402182  720494 pod_ready.go:82] duration metric: took 6.032492ms for pod "nvidia-device-plugin-daemonset-n79hn" in "kube-system" namespace to be "Ready" ...
	I0920 19:28:51.402206  720494 pod_ready.go:39] duration metric: took 1m40.599394134s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
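(The pod_ready.go:103 entries come from repeatedly reading the pod's Ready condition until it flips from "False" to "True". A comparable one-off check, assuming the same context and pod name shown in this log:

	kubectl --context addons-244316 -n kube-system get pod metrics-server-84c5f94fbc-zn5jl \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

This prints True or False, mirroring the Ready status the log reports.)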
	I0920 19:28:51.402223  720494 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:28:51.402271  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:28:51.402336  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:28:51.456299  720494 cri.go:89] found id: "7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:28:51.456320  720494 cri.go:89] found id: ""
	I0920 19:28:51.456328  720494 logs.go:276] 1 containers: [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb]
	I0920 19:28:51.456393  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.460648  720494 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:28:51.460789  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:28:51.505091  720494 cri.go:89] found id: "a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:28:51.505116  720494 cri.go:89] found id: ""
	I0920 19:28:51.505128  720494 logs.go:276] 1 containers: [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56]
	I0920 19:28:51.505189  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.509129  720494 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:28:51.509207  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:28:51.562231  720494 cri.go:89] found id: "057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:28:51.562252  720494 cri.go:89] found id: ""
	I0920 19:28:51.562260  720494 logs.go:276] 1 containers: [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78]
	I0920 19:28:51.562319  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.566016  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:28:51.566137  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:28:51.603264  720494 cri.go:89] found id: "4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:28:51.603287  720494 cri.go:89] found id: ""
	I0920 19:28:51.603295  720494 logs.go:276] 1 containers: [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232]
	I0920 19:28:51.603353  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.606913  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:28:51.606987  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:28:51.652913  720494 cri.go:89] found id: "f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:28:51.652935  720494 cri.go:89] found id: ""
	I0920 19:28:51.652943  720494 logs.go:276] 1 containers: [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba]
	I0920 19:28:51.653002  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.656955  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:28:51.657040  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:28:51.704412  720494 cri.go:89] found id: "be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:28:51.704438  720494 cri.go:89] found id: ""
	I0920 19:28:51.704447  720494 logs.go:276] 1 containers: [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8]
	I0920 19:28:51.704534  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.708634  720494 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:28:51.708744  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:28:51.752746  720494 cri.go:89] found id: "4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:28:51.752776  720494 cri.go:89] found id: ""
	I0920 19:28:51.752785  720494 logs.go:276] 1 containers: [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1]
	I0920 19:28:51.752879  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.758970  720494 logs.go:123] Gathering logs for kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] ...
	I0920 19:28:51.759003  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:28:51.819975  720494 logs.go:123] Gathering logs for etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] ...
	I0920 19:28:51.820014  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:28:51.876012  720494 logs.go:123] Gathering logs for kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] ...
	I0920 19:28:51.876043  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:28:51.921789  720494 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:28:51.921823  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:28:52.030887  720494 logs.go:123] Gathering logs for kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] ...
	I0920 19:28:52.030941  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:28:52.115160  720494 logs.go:123] Gathering logs for kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] ...
	I0920 19:28:52.115293  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:28:52.178170  720494 logs.go:123] Gathering logs for container status ...
	I0920 19:28:52.178239  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:28:52.241811  720494 logs.go:123] Gathering logs for kubelet ...
	I0920 19:28:52.241847  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:28:52.266767  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.267022  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:28:52.267252  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.267486  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:28:52.327738  720494 logs.go:123] Gathering logs for dmesg ...
	I0920 19:28:52.327779  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:28:52.346639  720494 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:28:52.346670  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:28:52.535292  720494 logs.go:123] Gathering logs for coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] ...
	I0920 19:28:52.535323  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:28:52.598442  720494 logs.go:123] Gathering logs for kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] ...
	I0920 19:28:52.598473  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:28:52.654339  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:28:52.654372  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:28:52.654455  720494 out.go:270] X Problems detected in kubelet:
	W0920 19:28:52.654469  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.654489  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:28:52.654500  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.654507  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:28:52.654512  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:28:52.654519  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
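The repeated kubelet findings above are Node-authorizer denials: a kubelet may read a ConfigMap only once a pod on its node references it, and at 19:27:10 no relationship between node addons-244316 and the local-path-storage objects had been recorded yet, so the list calls were rejected. They are transient during addon startup. A hedged spot-check could look like the sketch below (the impersonated subject mirrors the log line; `kubectl auth can-i` answers from RBAC and may differ from the graph-based Node authorizer):

	# Hedged reproduction, not part of the test run:
	kubectl --context addons-244316 auth can-i list configmaps \
	  --namespace local-path-storage \
	  --as system:node:addons-244316 --as-group system:nodes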
	I0920 19:29:02.655826  720494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:29:02.669891  720494 api_server.go:72] duration metric: took 2m37.936293093s to wait for apiserver process to appear ...
	I0920 19:29:02.669918  720494 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:29:02.669953  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:29:02.670013  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:29:02.709792  720494 cri.go:89] found id: "7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:02.709821  720494 cri.go:89] found id: ""
	I0920 19:29:02.709830  720494 logs.go:276] 1 containers: [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb]
	I0920 19:29:02.709905  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.713936  720494 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:29:02.714022  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:29:02.758325  720494 cri.go:89] found id: "a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:02.758351  720494 cri.go:89] found id: ""
	I0920 19:29:02.758360  720494 logs.go:276] 1 containers: [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56]
	I0920 19:29:02.758421  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.762432  720494 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:29:02.762517  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:29:02.816194  720494 cri.go:89] found id: "057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:02.816229  720494 cri.go:89] found id: ""
	I0920 19:29:02.816254  720494 logs.go:276] 1 containers: [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78]
	I0920 19:29:02.816358  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.820412  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:29:02.820495  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:29:02.868008  720494 cri.go:89] found id: "4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:02.868057  720494 cri.go:89] found id: ""
	I0920 19:29:02.868066  720494 logs.go:276] 1 containers: [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232]
	I0920 19:29:02.868176  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.872662  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:29:02.872784  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:29:02.922423  720494 cri.go:89] found id: "f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:02.922448  720494 cri.go:89] found id: ""
	I0920 19:29:02.922457  720494 logs.go:276] 1 containers: [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba]
	I0920 19:29:02.922570  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.926673  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:29:02.926808  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:29:02.974679  720494 cri.go:89] found id: "be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:02.974703  720494 cri.go:89] found id: ""
	I0920 19:29:02.974712  720494 logs.go:276] 1 containers: [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8]
	I0920 19:29:02.974773  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.978454  720494 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:29:02.978565  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:29:03.024328  720494 cri.go:89] found id: "4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:03.024410  720494 cri.go:89] found id: ""
	I0920 19:29:03.024433  720494 logs.go:276] 1 containers: [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1]
	I0920 19:29:03.024509  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:03.028984  720494 logs.go:123] Gathering logs for kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] ...
	I0920 19:29:03.029059  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:03.078751  720494 logs.go:123] Gathering logs for kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] ...
	I0920 19:29:03.078784  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:03.123529  720494 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:29:03.123565  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:29:03.267729  720494 logs.go:123] Gathering logs for kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] ...
	I0920 19:29:03.267765  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:03.319964  720494 logs.go:123] Gathering logs for kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] ...
	I0920 19:29:03.319999  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:03.377209  720494 logs.go:123] Gathering logs for etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] ...
	I0920 19:29:03.377254  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:03.430429  720494 logs.go:123] Gathering logs for coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] ...
	I0920 19:29:03.430466  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:03.479287  720494 logs.go:123] Gathering logs for kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] ...
	I0920 19:29:03.479326  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:03.561312  720494 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:29:03.561350  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:29:03.668739  720494 logs.go:123] Gathering logs for container status ...
	I0920 19:29:03.668801  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:29:03.732250  720494 logs.go:123] Gathering logs for kubelet ...
	I0920 19:29:03.732283  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:29:03.763347  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.763596  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:03.763788  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.764019  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:03.824458  720494 logs.go:123] Gathering logs for dmesg ...
	I0920 19:29:03.824495  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:29:03.842781  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:03.842807  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:29:03.842859  720494 out.go:270] X Problems detected in kubelet:
	W0920 19:29:03.842874  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.842882  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:03.842891  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.842901  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:03.842906  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:03.842912  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:29:13.844440  720494 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:29:13.852275  720494 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 19:29:13.853640  720494 api_server.go:141] control plane version: v1.31.1
	I0920 19:29:13.853669  720494 api_server.go:131] duration metric: took 11.183744147s to wait for apiserver health ...
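The healthz probe above is a plain HTTPS GET and can be reproduced by hand; a minimal sketch, assuming the same control-plane endpoint and skipping verification of the cluster's self-signed serving certificate:

	curl -k https://192.168.49.2:8443/healthz            # expect 200 and the body "ok"
	curl -k "https://192.168.49.2:8443/healthz?verbose"  # per-component check detail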
	I0920 19:29:13.853678  720494 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:29:13.853701  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:29:13.853773  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:29:13.894321  720494 cri.go:89] found id: "7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:13.894346  720494 cri.go:89] found id: ""
	I0920 19:29:13.894354  720494 logs.go:276] 1 containers: [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb]
	I0920 19:29:13.894418  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:13.898250  720494 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:29:13.898360  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:29:13.941458  720494 cri.go:89] found id: "a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:13.941492  720494 cri.go:89] found id: ""
	I0920 19:29:13.941500  720494 logs.go:276] 1 containers: [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56]
	I0920 19:29:13.941573  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:13.945504  720494 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:29:13.945587  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:29:13.986871  720494 cri.go:89] found id: "057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:13.986894  720494 cri.go:89] found id: ""
	I0920 19:29:13.986902  720494 logs.go:276] 1 containers: [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78]
	I0920 19:29:13.986962  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:13.990974  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:29:13.991061  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:29:14.034050  720494 cri.go:89] found id: "4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:14.034071  720494 cri.go:89] found id: ""
	I0920 19:29:14.034078  720494 logs.go:276] 1 containers: [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232]
	I0920 19:29:14.034141  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.038040  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:29:14.038128  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:29:14.081852  720494 cri.go:89] found id: "f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:14.081874  720494 cri.go:89] found id: ""
	I0920 19:29:14.081883  720494 logs.go:276] 1 containers: [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba]
	I0920 19:29:14.081944  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.085846  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:29:14.085928  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:29:14.133064  720494 cri.go:89] found id: "be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:14.133089  720494 cri.go:89] found id: ""
	I0920 19:29:14.133098  720494 logs.go:276] 1 containers: [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8]
	I0920 19:29:14.133162  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.136964  720494 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:29:14.137069  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:29:14.177123  720494 cri.go:89] found id: "4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:14.177146  720494 cri.go:89] found id: ""
	I0920 19:29:14.177155  720494 logs.go:276] 1 containers: [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1]
	I0920 19:29:14.177213  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.180998  720494 logs.go:123] Gathering logs for kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] ...
	I0920 19:29:14.181035  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:14.260229  720494 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:29:14.260265  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:29:14.378494  720494 logs.go:123] Gathering logs for etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] ...
	I0920 19:29:14.378538  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:14.437059  720494 logs.go:123] Gathering logs for coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] ...
	I0920 19:29:14.437092  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:14.489260  720494 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:29:14.489292  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:29:14.630069  720494 logs.go:123] Gathering logs for kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] ...
	I0920 19:29:14.630100  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:14.706585  720494 logs.go:123] Gathering logs for kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] ...
	I0920 19:29:14.706623  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:14.762872  720494 logs.go:123] Gathering logs for kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] ...
	I0920 19:29:14.762908  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:14.812852  720494 logs.go:123] Gathering logs for kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] ...
	I0920 19:29:14.812885  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:14.865844  720494 logs.go:123] Gathering logs for container status ...
	I0920 19:29:14.865879  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:29:14.923028  720494 logs.go:123] Gathering logs for kubelet ...
	I0920 19:29:14.923065  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:29:14.957088  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:14.957339  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:14.957537  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:14.957775  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:15.020892  720494 logs.go:123] Gathering logs for dmesg ...
	I0920 19:29:15.020998  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:29:15.055155  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:15.055266  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:29:15.055358  720494 out.go:270] X Problems detected in kubelet:
	W0920 19:29:15.055399  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:15.055452  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:15.055502  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:15.055543  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:15.055594  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:15.055620  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:29:25.080014  720494 system_pods.go:59] 18 kube-system pods found
	I0920 19:29:25.080090  720494 system_pods.go:61] "coredns-7c65d6cfc9-22l55" [f57f469f-0a10-4755-8ba7-7313badf3e97] Running
	I0920 19:29:25.080099  720494 system_pods.go:61] "csi-hostpath-attacher-0" [ede42a9c-57cd-4862-a473-bb89ae43f460] Running
	I0920 19:29:25.080104  720494 system_pods.go:61] "csi-hostpath-resizer-0" [e16bf395-29bf-4855-9bc2-e53e3fa612e9] Running
	I0920 19:29:25.080109  720494 system_pods.go:61] "csi-hostpathplugin-l9l66" [e3c46cb7-cf62-418b-8b71-c758942cced2] Running
	I0920 19:29:25.080113  720494 system_pods.go:61] "etcd-addons-244316" [c4f43849-20a5-4644-a084-aec2f01202e7] Running
	I0920 19:29:25.080249  720494 system_pods.go:61] "kindnet-62dj5" [0cef216d-8448-40df-9149-c124400377d6] Running
	I0920 19:29:25.080257  720494 system_pods.go:61] "kube-apiserver-addons-244316" [c65c8858-0a0f-424e-8135-ee436e4010d3] Running
	I0920 19:29:25.080267  720494 system_pods.go:61] "kube-controller-manager-addons-244316" [6abd01ee-fed9-4a26-8c01-19cd3b5e4d53] Running
	I0920 19:29:25.080281  720494 system_pods.go:61] "kube-ingress-dns-minikube" [d7af063e-bdd0-4bcb-916b-81ed6229b4e4] Running
	I0920 19:29:25.080286  720494 system_pods.go:61] "kube-proxy-2cdvm" [dc16595e-687e-4af7-a65b-bd9a28c49509] Running
	I0920 19:29:25.080327  720494 system_pods.go:61] "kube-scheduler-addons-244316" [f7b9623c-f0ee-4360-8f31-d3cd8cf88969] Running
	I0920 19:29:25.080346  720494 system_pods.go:61] "metrics-server-84c5f94fbc-zn5jl" [5ca001ce-a4b6-4954-bd42-f372e2f387fb] Running
	I0920 19:29:25.080381  720494 system_pods.go:61] "nvidia-device-plugin-daemonset-n79hn" [be19954c-2529-4f25-bd06-6dde36d7e9e8] Running
	I0920 19:29:25.080420  720494 system_pods.go:61] "registry-66c9cd494c-2gc7z" [c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97] Running
	I0920 19:29:25.080425  720494 system_pods.go:61] "registry-proxy-tbwxh" [6bb565a3-2192-4ce8-8582-11f1d9d8ec42] Running
	I0920 19:29:25.080430  720494 system_pods.go:61] "snapshot-controller-56fcc65765-7jw7t" [b10da70d-f5dd-46eb-993d-4973a5ac3e17] Running
	I0920 19:29:25.080456  720494 system_pods.go:61] "snapshot-controller-56fcc65765-xv9vm" [a58d3b2e-0d8e-4062-9b71-a472fa7e2fa8] Running
	I0920 19:29:25.080499  720494 system_pods.go:61] "storage-provisioner" [4ec9c5b1-c429-45cd-bc2c-9563f0f898d3] Running
	I0920 19:29:25.080507  720494 system_pods.go:74] duration metric: took 11.226821637s to wait for pod list to return data ...
	I0920 19:29:25.080520  720494 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:29:25.084074  720494 default_sa.go:45] found service account: "default"
	I0920 19:29:25.084118  720494 default_sa.go:55] duration metric: took 3.588373ms for default service account to be created ...
	I0920 19:29:25.084130  720494 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:29:25.098531  720494 system_pods.go:86] 18 kube-system pods found
	I0920 19:29:25.098691  720494 system_pods.go:89] "coredns-7c65d6cfc9-22l55" [f57f469f-0a10-4755-8ba7-7313badf3e97] Running
	I0920 19:29:25.098718  720494 system_pods.go:89] "csi-hostpath-attacher-0" [ede42a9c-57cd-4862-a473-bb89ae43f460] Running
	I0920 19:29:25.098741  720494 system_pods.go:89] "csi-hostpath-resizer-0" [e16bf395-29bf-4855-9bc2-e53e3fa612e9] Running
	I0920 19:29:25.098764  720494 system_pods.go:89] "csi-hostpathplugin-l9l66" [e3c46cb7-cf62-418b-8b71-c758942cced2] Running
	I0920 19:29:25.098787  720494 system_pods.go:89] "etcd-addons-244316" [c4f43849-20a5-4644-a084-aec2f01202e7] Running
	I0920 19:29:25.098799  720494 system_pods.go:89] "kindnet-62dj5" [0cef216d-8448-40df-9149-c124400377d6] Running
	I0920 19:29:25.098808  720494 system_pods.go:89] "kube-apiserver-addons-244316" [c65c8858-0a0f-424e-8135-ee436e4010d3] Running
	I0920 19:29:25.098814  720494 system_pods.go:89] "kube-controller-manager-addons-244316" [6abd01ee-fed9-4a26-8c01-19cd3b5e4d53] Running
	I0920 19:29:25.098820  720494 system_pods.go:89] "kube-ingress-dns-minikube" [d7af063e-bdd0-4bcb-916b-81ed6229b4e4] Running
	I0920 19:29:25.098824  720494 system_pods.go:89] "kube-proxy-2cdvm" [dc16595e-687e-4af7-a65b-bd9a28c49509] Running
	I0920 19:29:25.098829  720494 system_pods.go:89] "kube-scheduler-addons-244316" [f7b9623c-f0ee-4360-8f31-d3cd8cf88969] Running
	I0920 19:29:25.098833  720494 system_pods.go:89] "metrics-server-84c5f94fbc-zn5jl" [5ca001ce-a4b6-4954-bd42-f372e2f387fb] Running
	I0920 19:29:25.098839  720494 system_pods.go:89] "nvidia-device-plugin-daemonset-n79hn" [be19954c-2529-4f25-bd06-6dde36d7e9e8] Running
	I0920 19:29:25.098847  720494 system_pods.go:89] "registry-66c9cd494c-2gc7z" [c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97] Running
	I0920 19:29:25.098851  720494 system_pods.go:89] "registry-proxy-tbwxh" [6bb565a3-2192-4ce8-8582-11f1d9d8ec42] Running
	I0920 19:29:25.098858  720494 system_pods.go:89] "snapshot-controller-56fcc65765-7jw7t" [b10da70d-f5dd-46eb-993d-4973a5ac3e17] Running
	I0920 19:29:25.098862  720494 system_pods.go:89] "snapshot-controller-56fcc65765-xv9vm" [a58d3b2e-0d8e-4062-9b71-a472fa7e2fa8] Running
	I0920 19:29:25.098869  720494 system_pods.go:89] "storage-provisioner" [4ec9c5b1-c429-45cd-bc2c-9563f0f898d3] Running
	I0920 19:29:25.098878  720494 system_pods.go:126] duration metric: took 14.740845ms to wait for k8s-apps to be running ...
	I0920 19:29:25.098891  720494 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:29:25.098960  720494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:29:25.113514  720494 system_svc.go:56] duration metric: took 14.611289ms WaitForService to wait for kubelet
	I0920 19:29:25.113546  720494 kubeadm.go:582] duration metric: took 3m0.379953199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:29:25.113573  720494 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:29:25.118070  720494 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:29:25.118139  720494 node_conditions.go:123] node cpu capacity is 2
	I0920 19:29:25.118151  720494 node_conditions.go:105] duration metric: took 4.571143ms to run NodePressure ...
	I0920 19:29:25.118164  720494 start.go:241] waiting for startup goroutines ...
	I0920 19:29:25.118172  720494 start.go:246] waiting for cluster config update ...
	I0920 19:29:25.118187  720494 start.go:255] writing updated cluster config ...
	I0920 19:29:25.118506  720494 ssh_runner.go:195] Run: rm -f paused
	I0920 19:29:25.476128  720494 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:29:25.479298  720494 out.go:177] * Done! kubectl is now configured to use "addons-244316" cluster and "default" namespace by default
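The waits logged during this start reduce to three checks: the kubelet service is active, every kube-system pod reports Running, and the default service account exists. A hedged manual equivalent, assuming the profile and context names from the log:

	minikube -p addons-244316 ssh -- sudo systemctl is-active kubelet
	kubectl --context addons-244316 get pods -n kube-system
	kubectl --context addons-244316 get serviceaccount default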
	
	
	==> CRI-O <==
	Sep 20 19:38:39 addons-244316 crio[966]: time="2024-09-20 19:38:39.643734769Z" level=info msg="Removed container a4f92e5fd857b6a03c1a1add3e63fe74dc55058c9d6e968cc80e4f280153d3bc: kube-system/snapshot-controller-56fcc65765-7jw7t/volume-snapshot-controller" id=e2289ab8-aa32-424c-8616-ec41ea6ff8ef name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:38:39 addons-244316 crio[966]: time="2024-09-20 19:38:39.689266125Z" level=info msg="Stopping pod sandbox: 0553057b98795518be794d0c49c1f85187e97f397a79999520b007b24a5dc87d" id=f408296c-d0b5-4de6-a342-b5023f5ab97c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:38:39 addons-244316 crio[966]: time="2024-09-20 19:38:39.690049327Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:0553057b98795518be794d0c49c1f85187e97f397a79999520b007b24a5dc87d UID:5cc49574-83c5-4c15-988a-376020015b23 NetNS:/var/run/netns/9288f1d1-1b60-40da-a3ed-5650e58ad8c2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 19:38:39 addons-244316 crio[966]: time="2024-09-20 19:38:39.690195613Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Sep 20 19:38:39 addons-244316 crio[966]: time="2024-09-20 19:38:39.735160089Z" level=info msg="Stopped pod sandbox: 0553057b98795518be794d0c49c1f85187e97f397a79999520b007b24a5dc87d" id=f408296c-d0b5-4de6-a342-b5023f5ab97c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.490895718Z" level=info msg="Stopping container: 8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373 (timeout: 30s)" id=543b5f36-325f-4b59-9e51-3a8264d52e68 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:38:40 addons-244316 conmon[4062]: conmon 8f7e283457e2c36fbb9e <ninfo>: container 4073 exited with status 2
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.539602944Z" level=info msg="Stopping container: f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee (timeout: 30s)" id=a1c72065-e68d-4ea0-966e-bea4274822a8 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.701294906Z" level=info msg="Stopped container 8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373: kube-system/registry-66c9cd494c-2gc7z/registry" id=543b5f36-325f-4b59-9e51-3a8264d52e68 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.702056973Z" level=info msg="Stopping pod sandbox: 6c16bd80803a255322600d960e993afe9e3bd4152adf7e751a3e573aee60aa7b" id=12f73c98-465c-44a2-891f-2bd62fe6bffd name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.702397420Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-2gc7z Namespace:kube-system ID:6c16bd80803a255322600d960e993afe9e3bd4152adf7e751a3e573aee60aa7b UID:c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97 NetNS:/var/run/netns/1e0cccaa-35bf-41c5-8642-c84e0e5d4fd4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.702543690Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-2gc7z from CNI network \"kindnet\" (type=ptp)"
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.733663667Z" level=info msg="Stopped container f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee: kube-system/registry-proxy-tbwxh/registry-proxy" id=a1c72065-e68d-4ea0-966e-bea4274822a8 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.734071337Z" level=info msg="Stopping pod sandbox: ab1f7156c33277dceb1d3a0d642088af8e1e11549bfb6ca2a6780f533363ab40" id=4c70fad1-dee9-4371-a812-d163f056893b name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.745534328Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-JGC2RK7A4RNXBCJL - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-LSBWRHWPJ7LTVYYC - [0:0]\n:KUBE-HP-QFR5SD3WR7G6LIXN - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vstxr_ingress-nginx_535fbb72-dca5-47eb-9ccd-0e05fd541f07_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-QFR5SD3WR7G6LIXN\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vstxr_ingress-nginx_535fbb72-dca5-47eb-9ccd-0e05fd541f07_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-JGC2RK7A4RNXBCJL\n-A KUBE-HP-JGC2RK7A4RNXBCJL -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vstxr_ingress-nginx_535fbb72-dca5-47eb-9ccd-0e05fd541f07_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-JGC2RK7A4RNXBCJL -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vstxr_ingress-nginx_535fbb72-dca5-47eb-9ccd-0e05fd541f07_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.19:80\n-A KUBE-HP-QFR5SD3WR7G6LIXN -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vstxr_ingress-nginx_535fbb72-dca5-47eb-9ccd-0e05fd541f07_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-QFR5SD3WR7G6LIXN -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vstxr_ingress-nginx_535fbb72-dca5-47eb-9ccd-0e05fd541f07_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.19:443\n-X KUBE-HP-LSBWRHWPJ7LTVYYC\nCOMMIT\n"
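The restore above rewrites the hostport NAT chains: ports 80 and 443 still DNAT to the ingress controller at 10.244.0.19, while the chain left over from the stopped registry proxy (KUBE-HP-LSBWRHWPJ7LTVYYC) is deleted with -X. The resulting rules can be inspected on the node; a sketch, assuming SSH access through the minikube profile:

	minikube -p addons-244316 ssh -- sudo iptables -t nat -S KUBE-HOSTPORTS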
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.758046879Z" level=info msg="Stopped pod sandbox: 6c16bd80803a255322600d960e993afe9e3bd4152adf7e751a3e573aee60aa7b" id=12f73c98-465c-44a2-891f-2bd62fe6bffd name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.768107695Z" level=info msg="Closing host port tcp:5000"
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.771497467Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.771691178Z" level=info msg="Got pod network &{Name:registry-proxy-tbwxh Namespace:kube-system ID:ab1f7156c33277dceb1d3a0d642088af8e1e11549bfb6ca2a6780f533363ab40 UID:6bb565a3-2192-4ce8-8582-11f1d9d8ec42 NetNS:/var/run/netns/f5052134-895f-478a-b592-4456c17b9226 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.771836307Z" level=info msg="Deleting pod kube-system_registry-proxy-tbwxh from CNI network \"kindnet\" (type=ptp)"
	Sep 20 19:38:40 addons-244316 crio[966]: time="2024-09-20 19:38:40.810931313Z" level=info msg="Stopped pod sandbox: ab1f7156c33277dceb1d3a0d642088af8e1e11549bfb6ca2a6780f533363ab40" id=4c70fad1-dee9-4371-a812-d163f056893b name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:38:41 addons-244316 crio[966]: time="2024-09-20 19:38:41.635712861Z" level=info msg="Removing container: 8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373" id=00a9bda5-95f7-4a10-969d-b59674fdbb90 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:38:41 addons-244316 crio[966]: time="2024-09-20 19:38:41.653072029Z" level=info msg="Removed container 8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373: kube-system/registry-66c9cd494c-2gc7z/registry" id=00a9bda5-95f7-4a10-969d-b59674fdbb90 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:38:41 addons-244316 crio[966]: time="2024-09-20 19:38:41.657812852Z" level=info msg="Removing container: f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee" id=e2b8a6b7-ef8e-4065-bfc0-dce0add786a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:38:41 addons-244316 crio[966]: time="2024-09-20 19:38:41.692574091Z" level=info msg="Removed container f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee: kube-system/registry-proxy-tbwxh/registry-proxy" id=e2b8a6b7-ef8e-4065-bfc0-dce0add786a2 name=/runtime.v1.RuntimeService/RemoveContainer
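After these removals the registry containers should be gone from the runtime as well as from the pod list; a hedged check from inside the node:

	minikube -p addons-244316 ssh -- sudo crictl ps -a --name registry   # expect no rows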
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	c69792791cbbd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            29 seconds ago      Exited              gadget                     7                   05887ef3b3f12       gadget-kj4k6
	727a799cbc3b5       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             10 minutes ago      Running             controller                 0                   865406ed79da6       ingress-nginx-controller-bc57996ff-vstxr
	d315e9086557b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 10 minutes ago      Running             gcp-auth                   0                   c6f74f4e64606       gcp-auth-89d5ffd79-d2tpp
	37386e7680939       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             10 minutes ago      Running             local-path-provisioner     0                   cc709ac8a6230       local-path-provisioner-86d989889c-fzcjl
	fad6a3dc1df6d       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   0e878388421b6       nvidia-device-plugin-daemonset-n79hn
	fe7b8006fe5da       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        10 minutes ago      Running             metrics-server             0                   22cef3edf18b3       metrics-server-84c5f94fbc-zn5jl
	c9915246bb266       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              11 minutes ago      Running             yakd                       0                   ccb50425f1dc8       yakd-dashboard-67d98fc6b-pb547
	a616675d627b4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   11 minutes ago      Exited              patch                      0                   8491d3e8473d7       ingress-nginx-admission-patch-r8q44
	db8907b6eab1c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   11 minutes ago      Exited              create                     0                   c9c4141d93287       ingress-nginx-admission-create-tpm65
	22be32730e544       gcr.io/cloud-spanner-emulator/emulator@sha256:41ec188288c7943f488600462b2b74002814e52439be82d15de33c3ee4898a58               11 minutes ago      Running             cloud-spanner-emulator     0                   7ac06d97a266b       cloud-spanner-emulator-769b77f747-dp6lg
	0712afd630c50       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             11 minutes ago      Running             minikube-ingress-dns       0                   889ee343b1c15       kube-ingress-dns-minikube
	c524ed738a8d3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             11 minutes ago      Running             storage-provisioner        0                   780e530cacd2f       storage-provisioner
	057fc4f7aad90       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             11 minutes ago      Running             coredns                    0                   19fae96941a4c       coredns-7c65d6cfc9-22l55
	f693c5f3d507b       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             12 minutes ago      Running             kube-proxy                 0                   cc62ba102a745       kube-proxy-2cdvm
	4321d12c79ddf       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             12 minutes ago      Running             kindnet-cni                0                   b90c147beb0ad       kindnet-62dj5
	4d724338eea34       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             12 minutes ago      Running             kube-scheduler             0                   05db024319aa0       kube-scheduler-addons-244316
	be05ccc3ccb37       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             12 minutes ago      Running             kube-controller-manager    0                   6b477bdf2c558       kube-controller-manager-addons-244316
	7df0e0b9e62ff       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             12 minutes ago      Running             kube-apiserver             0                   ac32244a5406b       kube-apiserver-addons-244316
	a6f3359b2e88b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             12 minutes ago      Running             etcd                       0                   1c0ae6d7145c8       etcd-addons-244316
	
	
	==> coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] <==
	[INFO] 10.244.0.15:59114 - 56514 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104129s
	[INFO] 10.244.0.15:50932 - 11460 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002666137s
	[INFO] 10.244.0.15:50932 - 39626 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002906394s
	[INFO] 10.244.0.15:33525 - 33323 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000546449s
	[INFO] 10.244.0.15:33525 - 45348 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000577833s
	[INFO] 10.244.0.15:59699 - 23607 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00012291s
	[INFO] 10.244.0.15:59699 - 54075 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179384s
	[INFO] 10.244.0.15:32831 - 28558 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072893s
	[INFO] 10.244.0.15:32831 - 18096 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000140543s
	[INFO] 10.244.0.15:45505 - 40088 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101889s
	[INFO] 10.244.0.15:45505 - 32415 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000152037s
	[INFO] 10.244.0.15:34547 - 57598 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001603466s
	[INFO] 10.244.0.15:34547 - 49347 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001676244s
	[INFO] 10.244.0.15:39827 - 28188 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076618s
	[INFO] 10.244.0.15:39827 - 45592 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000050157s
	[INFO] 10.244.0.20:47707 - 16074 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002686658s
	[INFO] 10.244.0.20:46427 - 23800 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00268235s
	[INFO] 10.244.0.20:57231 - 19877 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157937s
	[INFO] 10.244.0.20:45688 - 62216 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000097565s
	[INFO] 10.244.0.20:33274 - 51885 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125626s
	[INFO] 10.244.0.20:49302 - 18918 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092175s
	[INFO] 10.244.0.20:49895 - 24635 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00244512s
	[INFO] 10.244.0.20:44018 - 55406 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002061548s
	[INFO] 10.244.0.20:38373 - 29636 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000956351s
	[INFO] 10.244.0.20:33201 - 4012 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000749012s
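These coredns entries show normal search-path expansion: the NXDOMAIN responses are the expected misses on appended suffixes, and the bare service FQDN resolves with NOERROR, so the registry test's timeout does not look like a DNS failure. An in-cluster lookup can confirm this; a sketch using a throwaway pod (the pod name is illustrative):

	kubectl --context addons-244316 run --rm -it dns-check --image=busybox --restart=Never \
	  -- nslookup registry.kube-system.svc.cluster.local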
	
	
	==> describe nodes <==
	Name:               addons-244316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-244316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-244316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_26_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-244316
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:26:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-244316
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:38:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:38:24 +0000   Fri, 20 Sep 2024 19:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:38:24 +0000   Fri, 20 Sep 2024 19:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:38:24 +0000   Fri, 20 Sep 2024 19:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:38:24 +0000   Fri, 20 Sep 2024 19:27:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-244316
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 545b19fe9bdc45b392d49f2b91832698
	  System UUID:                ef4c1a4b-0c08-44ed-8fa8-b5206cbb0701
	  Boot ID:                    7d682649-b07c-44b5-a0a6-3c50df538ea4
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-dp6lg     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-kj4k6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-d2tpp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vstxr    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-22l55                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-244316                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-62dj5                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-addons-244316                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-244316       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2cdvm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-244316                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-zn5jl             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-n79hn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-fzcjl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-pb547              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-244316 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-244316 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-244316 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-244316 event: Registered Node addons-244316 in Controller
	  Normal   NodeReady                11m   kubelet          Node addons-244316 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 18:56] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep20 19:09] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep20 19:16] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] <==
	{"level":"info","ts":"2024-09-20T19:26:15.493046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T19:26:15.493083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T19:26:15.496875Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-244316 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:26:15.496978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:26:15.497337Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.498258Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:26:15.499377Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:26:15.499863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:26:15.499980Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.500310Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.500373Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.505603Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:26:15.506814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T19:26:15.513007Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:26:15.513100Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:26:26.241888Z","caller":"traceutil/trace.go:171","msg":"trace[2026199184] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"100.101501ms","start":"2024-09-20T19:26:26.141768Z","end":"2024-09-20T19:26:26.241870Z","steps":["trace[2026199184] 'process raft request'  (duration: 39.701056ms)","trace[2026199184] 'compare'  (duration: 60.307606ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T19:26:27.471528Z","caller":"traceutil/trace.go:171","msg":"trace[1882640504] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"106.638225ms","start":"2024-09-20T19:26:27.364872Z","end":"2024-09-20T19:26:27.471511Z","steps":["trace[1882640504] 'process raft request'  (duration: 59.29789ms)","trace[1882640504] 'compare'  (duration: 46.971064ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T19:26:27.526046Z","caller":"traceutil/trace.go:171","msg":"trace[973402408] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"107.82845ms","start":"2024-09-20T19:26:27.418197Z","end":"2024-09-20T19:26:27.526026Z","steps":["trace[973402408] 'process raft request'  (duration: 53.066249ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:27.571075Z","caller":"traceutil/trace.go:171","msg":"trace[2112418004] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"122.751557ms","start":"2024-09-20T19:26:27.448305Z","end":"2024-09-20T19:26:27.571056Z","steps":["trace[2112418004] 'process raft request'  (duration: 119.526598ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:28.500251Z","caller":"traceutil/trace.go:171","msg":"trace[95206656] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"174.74035ms","start":"2024-09-20T19:26:28.325306Z","end":"2024-09-20T19:26:28.500046Z","steps":["trace[95206656] 'process raft request'  (duration: 120.086848ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:28.701213Z","caller":"traceutil/trace.go:171","msg":"trace[792115432] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"135.97743ms","start":"2024-09-20T19:26:28.565220Z","end":"2024-09-20T19:26:28.701197Z","steps":["trace[792115432] 'process raft request'  (duration: 135.866171ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:28.721747Z","caller":"traceutil/trace.go:171","msg":"trace[634577408] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"148.304093ms","start":"2024-09-20T19:26:28.573386Z","end":"2024-09-20T19:26:28.721690Z","steps":["trace[634577408] 'process raft request'  (duration: 147.53831ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:36:15.774535Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1499}
	{"level":"info","ts":"2024-09-20T19:36:15.811553Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1499,"took":"36.293092ms","hash":1427089083,"current-db-size-bytes":6217728,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3289088,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-20T19:36:15.811629Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1427089083,"revision":1499,"compact-revision":-1}
	
	
	==> gcp-auth [d315e9086557bcb438ba82c9c8029a5fa6eb5ca36d005581c58a6149197ccc08] <==
	2024/09/20 19:28:07 GCP Auth Webhook started!
	2024/09/20 19:29:25 Ready to marshal response ...
	2024/09/20 19:29:25 Ready to write response ...
	2024/09/20 19:29:25 Ready to marshal response ...
	2024/09/20 19:29:25 Ready to write response ...
	2024/09/20 19:29:25 Ready to marshal response ...
	2024/09/20 19:29:25 Ready to write response ...
	2024/09/20 19:37:30 Ready to marshal response ...
	2024/09/20 19:37:30 Ready to write response ...
	2024/09/20 19:37:30 Ready to marshal response ...
	2024/09/20 19:37:30 Ready to write response ...
	2024/09/20 19:37:30 Ready to marshal response ...
	2024/09/20 19:37:30 Ready to write response ...
	2024/09/20 19:37:39 Ready to marshal response ...
	2024/09/20 19:37:39 Ready to write response ...
	2024/09/20 19:38:07 Ready to marshal response ...
	2024/09/20 19:38:07 Ready to write response ...
	2024/09/20 19:38:22 Ready to marshal response ...
	2024/09/20 19:38:22 Ready to write response ...
	
	
	==> kernel <==
	 19:38:42 up  3:21,  0 users,  load average: 0.73, 0.80, 1.81
	Linux addons-244316 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] <==
	I0920 19:36:40.065780       1 main.go:299] handling current node
	I0920 19:36:50.054197       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:36:50.054259       1 main.go:299] handling current node
	I0920 19:37:00.057525       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:37:00.057558       1 main.go:299] handling current node
	I0920 19:37:10.052653       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:37:10.052718       1 main.go:299] handling current node
	I0920 19:37:20.052215       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:37:20.052254       1 main.go:299] handling current node
	I0920 19:37:30.050706       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:37:30.050854       1 main.go:299] handling current node
	I0920 19:37:40.124855       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:37:40.124891       1 main.go:299] handling current node
	I0920 19:37:50.050603       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:37:50.050651       1 main.go:299] handling current node
	I0920 19:38:00.049970       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:38:00.050064       1 main.go:299] handling current node
	I0920 19:38:10.050421       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:38:10.050456       1 main.go:299] handling current node
	I0920 19:38:20.050124       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:38:20.050158       1 main.go:299] handling current node
	I0920 19:38:30.051222       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:38:30.051448       1 main.go:299] handling current node
	I0920 19:38:40.050282       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:38:40.050454       1 main.go:299] handling current node
	
	
	==> kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] <==
	E0920 19:28:02.787541       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="10.877794ms" method="GET" path="/apis/apps/v1/namespaces/yakd-dashboard/replicasets/yakd-dashboard-67d98fc6b" result=null
	W0920 19:28:51.045730       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 19:28:51.045950       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 19:28:51.046775       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.3.33:443: connect: connection refused" logger="UnhandledError"
	E0920 19:28:51.048849       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.3.33:443: connect: connection refused" logger="UnhandledError"
	E0920 19:28:51.053974       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.3.33:443: connect: connection refused" logger="UnhandledError"
	I0920 19:28:51.140284       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 19:37:30.422760       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.183.123"}
	I0920 19:38:18.972262       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 19:38:38.722956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.723050       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.790058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.790113       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.821684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.822017       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.825093       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.825209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.857263       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.857387       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 19:38:39.823647       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 19:38:39.858807       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 19:38:39.873190       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] <==
	I0920 19:37:30.532784       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="45.86244ms"
	I0920 19:37:30.551805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="18.958499ms"
	I0920 19:37:30.552015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="71.958µs"
	I0920 19:37:30.565318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="105.097µs"
	I0920 19:37:35.256269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="51.396µs"
	I0920 19:37:35.293612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="13.635109ms"
	I0920 19:37:35.293825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="80.293µs"
	I0920 19:37:42.106644       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="4.882µs"
	I0920 19:37:52.341199       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0920 19:37:53.612372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-244316"
	I0920 19:38:24.188227       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-244316"
	I0920 19:38:32.024408       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0920 19:38:32.203317       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0920 19:38:33.044929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-244316"
	I0920 19:38:38.899814       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="14.924µs"
	E0920 19:38:39.826143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0920 19:38:39.860313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0920 19:38:39.875576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 19:38:40.468122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.268µs"
	W0920 19:38:40.659563       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:38:40.659614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:38:40.874209       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:38:40.874254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:38:41.227229       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:38:41.227367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] <==
	I0920 19:26:30.694959       1 server_linux.go:66] "Using iptables proxy"
	I0920 19:26:30.941347       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 19:26:30.941501       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:26:31.113765       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 19:26:31.114389       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:26:31.247312       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:26:31.248577       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:26:31.248685       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:26:31.285254       1 config.go:199] "Starting service config controller"
	I0920 19:26:31.285292       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:26:31.285317       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:26:31.285321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:26:31.285701       1 config.go:328] "Starting node config controller"
	I0920 19:26:31.285721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:26:31.386427       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:26:31.388471       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:26:31.386092       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] <==
	W0920 19:26:17.922834       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:26:17.922851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.922893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:26:17.922937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.922953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0920 19:26:17.923002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:26:17.923021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 19:26:17.923044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.922914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:26:17.923121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.923082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:26:17.923221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.745705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:26:18.745830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.753046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:26:18.753087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.822086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:26:18.822126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.825544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:26:18.825670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:19.036881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:26:19.036999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:19.047421       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:26:19.047533       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 19:26:21.817013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:38:39 addons-244316 kubelet[1514]: I0920 19:38:39.915080    1514 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5cc49574-83c5-4c15-988a-376020015b23-gcp-creds\") on node \"addons-244316\" DevicePath \"\""
	Sep 20 19:38:40 addons-244316 kubelet[1514]: I0920 19:38:40.571617    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a58d3b2e-0d8e-4062-9b71-a472fa7e2fa8" path="/var/lib/kubelet/pods/a58d3b2e-0d8e-4062-9b71-a472fa7e2fa8/volumes"
	Sep 20 19:38:40 addons-244316 kubelet[1514]: I0920 19:38:40.572052    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10da70d-f5dd-46eb-993d-4973a5ac3e17" path="/var/lib/kubelet/pods/b10da70d-f5dd-46eb-993d-4973a5ac3e17/volumes"
	Sep 20 19:38:40 addons-244316 kubelet[1514]: I0920 19:38:40.827886    1514 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4pcp\" (UniqueName: \"kubernetes.io/projected/c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97-kube-api-access-r4pcp\") pod \"c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97\" (UID: \"c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97\") "
	Sep 20 19:38:40 addons-244316 kubelet[1514]: I0920 19:38:40.834625    1514 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97-kube-api-access-r4pcp" (OuterVolumeSpecName: "kube-api-access-r4pcp") pod "c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97" (UID: "c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97"). InnerVolumeSpecName "kube-api-access-r4pcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:38:40 addons-244316 kubelet[1514]: E0920 19:38:40.856511    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861120856274161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:503197,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:38:40 addons-244316 kubelet[1514]: E0920 19:38:40.856543    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861120856274161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:503197,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:38:40 addons-244316 kubelet[1514]: I0920 19:38:40.928641    1514 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c424d\" (UniqueName: \"kubernetes.io/projected/6bb565a3-2192-4ce8-8582-11f1d9d8ec42-kube-api-access-c424d\") pod \"6bb565a3-2192-4ce8-8582-11f1d9d8ec42\" (UID: \"6bb565a3-2192-4ce8-8582-11f1d9d8ec42\") "
	Sep 20 19:38:40 addons-244316 kubelet[1514]: I0920 19:38:40.929528    1514 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r4pcp\" (UniqueName: \"kubernetes.io/projected/c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97-kube-api-access-r4pcp\") on node \"addons-244316\" DevicePath \"\""
	Sep 20 19:38:40 addons-244316 kubelet[1514]: I0920 19:38:40.931207    1514 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb565a3-2192-4ce8-8582-11f1d9d8ec42-kube-api-access-c424d" (OuterVolumeSpecName: "kube-api-access-c424d") pod "6bb565a3-2192-4ce8-8582-11f1d9d8ec42" (UID: "6bb565a3-2192-4ce8-8582-11f1d9d8ec42"). InnerVolumeSpecName "kube-api-access-c424d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:38:41 addons-244316 kubelet[1514]: I0920 19:38:41.030515    1514 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-c424d\" (UniqueName: \"kubernetes.io/projected/6bb565a3-2192-4ce8-8582-11f1d9d8ec42-kube-api-access-c424d\") on node \"addons-244316\" DevicePath \"\""
	Sep 20 19:38:41 addons-244316 kubelet[1514]: I0920 19:38:41.631002    1514 scope.go:117] "RemoveContainer" containerID="8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373"
	Sep 20 19:38:41 addons-244316 kubelet[1514]: I0920 19:38:41.653477    1514 scope.go:117] "RemoveContainer" containerID="8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373"
	Sep 20 19:38:41 addons-244316 kubelet[1514]: E0920 19:38:41.655894    1514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373\": container with ID starting with 8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373 not found: ID does not exist" containerID="8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373"
	Sep 20 19:38:41 addons-244316 kubelet[1514]: I0920 19:38:41.655932    1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373"} err="failed to get container status \"8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373\": rpc error: code = NotFound desc = could not find container \"8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373\": container with ID starting with 8f7e283457e2c36fbb9e4b5c755bfc84af06fe3a46288e92477ffb0cfff0d373 not found: ID does not exist"
	Sep 20 19:38:41 addons-244316 kubelet[1514]: I0920 19:38:41.655961    1514 scope.go:117] "RemoveContainer" containerID="f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee"
	Sep 20 19:38:41 addons-244316 kubelet[1514]: I0920 19:38:41.693947    1514 scope.go:117] "RemoveContainer" containerID="f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee"
	Sep 20 19:38:41 addons-244316 kubelet[1514]: E0920 19:38:41.694812    1514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee\": container with ID starting with f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee not found: ID does not exist" containerID="f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee"
	Sep 20 19:38:41 addons-244316 kubelet[1514]: I0920 19:38:41.694852    1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee"} err="failed to get container status \"f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee\": rpc error: code = NotFound desc = could not find container \"f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee\": container with ID starting with f365acf803bc943478798e046ece4befadccc56572b4623ea2d0731be73362ee not found: ID does not exist"
	Sep 20 19:38:42 addons-244316 kubelet[1514]: I0920 19:38:42.568500    1514 scope.go:117] "RemoveContainer" containerID="c69792791cbbd97f6d3f3a4da79f3d926456d761e2d3d9822c75d982f3fe6d12"
	Sep 20 19:38:42 addons-244316 kubelet[1514]: E0920 19:38:42.568735    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-kj4k6_gadget(8ca31dab-8797-4373-93ea-3d69e3e917d1)\"" pod="gadget/gadget-kj4k6" podUID="8ca31dab-8797-4373-93ea-3d69e3e917d1"
	Sep 20 19:38:42 addons-244316 kubelet[1514]: E0920 19:38:42.570233    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="26ebf772-d5b9-4d72-93d5-706cab403777"
	Sep 20 19:38:42 addons-244316 kubelet[1514]: I0920 19:38:42.571274    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cc49574-83c5-4c15-988a-376020015b23" path="/var/lib/kubelet/pods/5cc49574-83c5-4c15-988a-376020015b23/volumes"
	Sep 20 19:38:42 addons-244316 kubelet[1514]: I0920 19:38:42.571556    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb565a3-2192-4ce8-8582-11f1d9d8ec42" path="/var/lib/kubelet/pods/6bb565a3-2192-4ce8-8582-11f1d9d8ec42/volumes"
	Sep 20 19:38:42 addons-244316 kubelet[1514]: I0920 19:38:42.572040    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97" path="/var/lib/kubelet/pods/c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97/volumes"
	
	
	==> storage-provisioner [c524ed738a8d38b9f6bd037c1dc8d7fef60bc2f2cd8fb0f684e4eb386bf75f67] <==
	I0920 19:27:11.555337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:27:11.604952       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:27:11.605114       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:27:11.637195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:27:11.637419       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-244316_90ec2255-c73c-4224-95cd-667ebf7eeaa4!
	I0920 19:27:11.637476       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"064fe9f7-ba2a-47d4-ac4c-01438c7426a0", APIVersion:"v1", ResourceVersion:"888", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-244316_90ec2255-c73c-4224-95cd-667ebf7eeaa4 became leader
	I0920 19:27:11.737871       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-244316_90ec2255-c73c-4224-95cd-667ebf7eeaa4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-244316 -n addons-244316
helpers_test.go:261: (dbg) Run:  kubectl --context addons-244316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-tpm65 ingress-nginx-admission-patch-r8q44
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-244316 describe pod busybox ingress-nginx-admission-create-tpm65 ingress-nginx-admission-patch-r8q44
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-244316 describe pod busybox ingress-nginx-admission-create-tpm65 ingress-nginx-admission-patch-r8q44: exit status 1 (95.519036ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-244316/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 19:29:25 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x65mx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x65mx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-244316
	  Normal   Pulling    7m57s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m57s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m57s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m28s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m15s (x20 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tpm65" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-r8q44" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-244316 describe pod busybox ingress-nginx-admission-create-tpm65 ingress-nginx-admission-patch-r8q44: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.07s)

TestAddons/parallel/Ingress (153.31s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress


=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-244316 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-244316 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-244316 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9d4cae2d-0d7d-416d-ab92-5ccba65215bc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9d4cae2d-0d7d-416d-ab92-5ccba65215bc] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004201515s
I0920 19:39:04.053903  719734 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-244316 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.93377394s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
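The failing probe above can be re-run by hand for manual triage while the cluster is still up. A minimal sketch, assuming the same profile name addons-244316; the --max-time and -w flags are standard curl options added here for readability and are not part of the test harness:

	out/minikube-linux-arm64 -p addons-244316 ssh \
	  "curl -s -o /dev/null -w '%{http_code}\n' --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

Exit status 28 in the failure above is curl's "operation timed out" code, i.e. the request to the node-local ingress endpoint never completed at all, rather than returning an unexpected HTTP status.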
addons_test.go:284: (dbg) Run:  kubectl --context addons-244316 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-244316 addons disable ingress-dns --alsologtostderr -v=1: (1.253449131s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-244316 addons disable ingress --alsologtostderr -v=1: (7.830921566s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-244316
helpers_test.go:235: (dbg) docker inspect addons-244316:

-- stdout --
	[
	    {
	        "Id": "3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7",
	        "Created": "2024-09-20T19:25:55.126788858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 720989,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T19:25:55.300608812Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/hosts",
	        "LogPath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7-json.log",
	        "Name": "/addons-244316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-244316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-244316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b-init/diff:/var/lib/docker/overlay2/abb52e4f5a7bf897f28cf92e83fcbaaa3eeab65622f14fe44da11027a9deb44f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-244316",
	                "Source": "/var/lib/docker/volumes/addons-244316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-244316",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-244316",
	                "name.minikube.sigs.k8s.io": "addons-244316",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f3d1276f3986829b7ef05a9018d68f3626ebc86f1f53155e972dab26ef3188f",
	            "SandboxKey": "/var/run/docker/netns/3f3d1276f398",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-244316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8bb19f13f00a01d1da94938835d45e58571681a0667d77334eb4d48ebd8f6ef5",
	                    "EndpointID": "84f0a8ea26206e832205d2bb50a56b3db3dc2ad8c485969f2e47f1627577b1a0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-244316",
	                        "3d82610f1fe4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
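For reference, each host port recorded under NetworkSettings.Ports in the inspect dump above can be read back directly with a Go template instead of re-parsing the full JSON; minikube itself uses the same template form to locate the SSH port later in these logs. A minimal sketch, assuming the addons-244316 container still exists:

	# Print the host port bound to the registry's 5000/tcp (32770 in the dump above).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-244316

Swapping in "8443/tcp" yields the host port for the Kubernetes API server (32771 in this run).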
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-244316 -n addons-244316
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-244316 logs -n 25: (1.568312913s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| delete  | -p download-only-533694              | download-only-533694   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | -o=json --download-only              | download-only-484642   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | -p download-only-484642              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| delete  | -p download-only-484642              | download-only-484642   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| delete  | -p download-only-533694              | download-only-533694   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| delete  | -p download-only-484642              | download-only-484642   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | --download-only -p                   | download-docker-394536 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | download-docker-394536               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-394536            | download-docker-394536 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | --download-only -p                   | binary-mirror-387387   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | binary-mirror-387387                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34931               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-387387              | binary-mirror-387387   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| addons  | enable dashboard -p                  | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | addons-244316                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | addons-244316                        |                        |         |         |                     |                     |
	| start   | -p addons-244316 --wait=true         | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:29 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:37 UTC | 20 Sep 24 19:37 UTC |
	|         | -p addons-244316                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-244316 addons disable         | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:37 UTC | 20 Sep 24 19:37 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-244316 addons                 | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-244316 addons                 | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-244316 ip                     | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	| addons  | addons-244316 addons disable         | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | addons-244316                        |                        |         |         |                     |                     |
	| ssh     | addons-244316 ssh curl -s            | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-244316 ip                     | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	| addons  | addons-244316 addons disable         | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-244316 addons disable         | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:25:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:25:29.773517  720494 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:29.773681  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:29.773717  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:29.773723  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:29.774046  720494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 19:25:29.774682  720494 out.go:352] Setting JSON to false
	I0920 19:25:29.775868  720494 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11279,"bootTime":1726849051,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 19:25:29.775943  720494 start.go:139] virtualization:  
	I0920 19:25:29.779178  720494 out.go:177] * [addons-244316] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:25:29.782484  720494 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:25:29.782599  720494 notify.go:220] Checking for updates...
	I0920 19:25:29.787949  720494 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:25:29.791244  720494 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:25:29.793888  720494 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	I0920 19:25:29.796579  720494 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:25:29.799156  720494 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:25:29.802100  720494 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:25:29.830398  720494 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:25:29.830533  720494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:29.884753  720494 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:25:29.875307304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:29.884872  720494 docker.go:318] overlay module found
	I0920 19:25:29.887812  720494 out.go:177] * Using the docker driver based on user configuration
	I0920 19:25:29.890512  720494 start.go:297] selected driver: docker
	I0920 19:25:29.890532  720494 start.go:901] validating driver "docker" against <nil>
	I0920 19:25:29.890547  720494 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:25:29.891202  720494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:29.946608  720494 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:25:29.93724064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:29.946823  720494 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:25:29.947062  720494 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:25:29.949812  720494 out.go:177] * Using Docker driver with root privileges
	I0920 19:25:29.952570  720494 cni.go:84] Creating CNI manager for ""
	I0920 19:25:29.952644  720494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:25:29.952660  720494 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 19:25:29.952801  720494 start.go:340] cluster config:
	{Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:29.957445  720494 out.go:177] * Starting "addons-244316" primary control-plane node in "addons-244316" cluster
	I0920 19:25:29.960190  720494 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 19:25:29.963127  720494 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:25:29.965720  720494 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:25:29.965816  720494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:29.965854  720494 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0920 19:25:29.965880  720494 cache.go:56] Caching tarball of preloaded images
	I0920 19:25:29.965965  720494 preload.go:172] Found /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0920 19:25:29.965980  720494 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:25:29.966344  720494 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/config.json ...
	I0920 19:25:29.966373  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/config.json: {Name:mk6955f082c6754495d7aaba1d3a3077fbb595bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:29.982114  720494 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:25:29.982227  720494 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:25:29.982252  720494 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:25:29.982261  720494 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:25:29.982269  720494 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:25:29.982275  720494 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:25:47.951558  720494 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:25:47.951596  720494 cache.go:194] Successfully downloaded all kic artifacts
	I0920 19:25:47.951647  720494 start.go:360] acquireMachinesLock for addons-244316: {Name:mk0522c0afca04ad0b8b7308c1947c33a5b75632 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:47.951772  720494 start.go:364] duration metric: took 100.896µs to acquireMachinesLock for "addons-244316"
	I0920 19:25:47.951805  720494 start.go:93] Provisioning new machine with config: &{Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:25:47.951885  720494 start.go:125] createHost starting for "" (driver="docker")
	I0920 19:25:47.953438  720494 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 19:25:47.953715  720494 start.go:159] libmachine.API.Create for "addons-244316" (driver="docker")
	I0920 19:25:47.953752  720494 client.go:168] LocalClient.Create starting
	I0920 19:25:47.953877  720494 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem
	I0920 19:25:48.990003  720494 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem
	I0920 19:25:49.508284  720494 cli_runner.go:164] Run: docker network inspect addons-244316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 19:25:49.524511  720494 cli_runner.go:211] docker network inspect addons-244316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 19:25:49.524596  720494 network_create.go:284] running [docker network inspect addons-244316] to gather additional debugging logs...
	I0920 19:25:49.524617  720494 cli_runner.go:164] Run: docker network inspect addons-244316
	W0920 19:25:49.541131  720494 cli_runner.go:211] docker network inspect addons-244316 returned with exit code 1
	I0920 19:25:49.541164  720494 network_create.go:287] error running [docker network inspect addons-244316]: docker network inspect addons-244316: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-244316 not found
	I0920 19:25:49.541203  720494 network_create.go:289] output of [docker network inspect addons-244316]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-244316 not found
	
	** /stderr **
	I0920 19:25:49.541314  720494 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:25:49.555672  720494 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004c8400}
	I0920 19:25:49.555719  720494 network_create.go:124] attempt to create docker network addons-244316 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 19:25:49.555776  720494 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-244316 addons-244316
	I0920 19:25:49.624569  720494 network_create.go:108] docker network addons-244316 192.168.49.0/24 created
	I0920 19:25:49.624607  720494 kic.go:121] calculated static IP "192.168.49.2" for the "addons-244316" container
	I0920 19:25:49.624710  720494 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 19:25:49.638281  720494 cli_runner.go:164] Run: docker volume create addons-244316 --label name.minikube.sigs.k8s.io=addons-244316 --label created_by.minikube.sigs.k8s.io=true
	I0920 19:25:49.656152  720494 oci.go:103] Successfully created a docker volume addons-244316
	I0920 19:25:49.656249  720494 cli_runner.go:164] Run: docker run --rm --name addons-244316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-244316 --entrypoint /usr/bin/test -v addons-244316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 19:25:50.909463  720494 cli_runner.go:217] Completed: docker run --rm --name addons-244316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-244316 --entrypoint /usr/bin/test -v addons-244316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.253168687s)
	I0920 19:25:50.909494  720494 oci.go:107] Successfully prepared a docker volume addons-244316
	I0920 19:25:50.909519  720494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:50.909540  720494 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 19:25:50.909613  720494 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-244316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 19:25:55.044839  720494 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-244316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.135176316s)
	I0920 19:25:55.044877  720494 kic.go:203] duration metric: took 4.135334236s to extract preloaded images to volume ...
	W0920 19:25:55.045079  720494 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 19:25:55.045238  720494 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 19:25:55.111173  720494 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-244316 --name addons-244316 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-244316 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-244316 --network addons-244316 --ip 192.168.49.2 --volume addons-244316:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 19:25:55.497137  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Running}}
	I0920 19:25:55.513667  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:25:55.540765  720494 cli_runner.go:164] Run: docker exec addons-244316 stat /var/lib/dpkg/alternatives/iptables
	I0920 19:25:55.618535  720494 oci.go:144] the created container "addons-244316" has a running status.
	I0920 19:25:55.618561  720494 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa...
	I0920 19:25:55.937892  720494 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 19:25:55.968552  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:25:56.000829  720494 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 19:25:56.000849  720494 kic_runner.go:114] Args: [docker exec --privileged addons-244316 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 19:25:56.069963  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:25:56.090448  720494 machine.go:93] provisionDockerMachine start ...
	I0920 19:25:56.090542  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:56.110367  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:56.110637  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:56.110647  720494 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:25:56.305168  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-244316
	
	I0920 19:25:56.305282  720494 ubuntu.go:169] provisioning hostname "addons-244316"
	I0920 19:25:56.305398  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:56.346342  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:56.346689  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:56.346713  720494 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-244316 && echo "addons-244316" | sudo tee /etc/hostname
	I0920 19:25:56.522053  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-244316
	
	I0920 19:25:56.522136  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:56.542986  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:56.543222  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:56.543240  720494 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-244316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-244316/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-244316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:25:56.688866  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:25:56.688893  720494 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-712952/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-712952/.minikube}
	I0920 19:25:56.688933  720494 ubuntu.go:177] setting up certificates
	I0920 19:25:56.688948  720494 provision.go:84] configureAuth start
	I0920 19:25:56.689025  720494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-244316
	I0920 19:25:56.706017  720494 provision.go:143] copyHostCerts
	I0920 19:25:56.706108  720494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem (1082 bytes)
	I0920 19:25:56.706235  720494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem (1123 bytes)
	I0920 19:25:56.706299  720494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem (1675 bytes)
	I0920 19:25:56.706352  720494 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem org=jenkins.addons-244316 san=[127.0.0.1 192.168.49.2 addons-244316 localhost minikube]
	I0920 19:25:57.019466  720494 provision.go:177] copyRemoteCerts
	I0920 19:25:57.019547  720494 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:25:57.019592  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.036410  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.138382  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 19:25:57.166107  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 19:25:57.190820  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:25:57.215294  720494 provision.go:87] duration metric: took 526.319417ms to configureAuth
	I0920 19:25:57.215365  720494 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:25:57.215581  720494 config.go:182] Loaded profile config "addons-244316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:57.215698  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.232471  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:57.232769  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:57.232792  720494 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:25:57.476783  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:25:57.476851  720494 machine.go:96] duration metric: took 1.386383012s to provisionDockerMachine
	I0920 19:25:57.476877  720494 client.go:171] duration metric: took 9.523113336s to LocalClient.Create
	I0920 19:25:57.476912  720494 start.go:167] duration metric: took 9.523196543s to libmachine.API.Create "addons-244316"
	I0920 19:25:57.476938  720494 start.go:293] postStartSetup for "addons-244316" (driver="docker")
	I0920 19:25:57.476964  720494 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:25:57.477048  720494 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:25:57.477143  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.493786  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.597872  720494 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:25:57.601104  720494 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:25:57.601148  720494 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:25:57.601160  720494 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:25:57.601168  720494 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:25:57.601178  720494 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/addons for local assets ...
	I0920 19:25:57.601253  720494 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/files for local assets ...
	I0920 19:25:57.601279  720494 start.go:296] duration metric: took 124.321395ms for postStartSetup
	I0920 19:25:57.601598  720494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-244316
	I0920 19:25:57.617888  720494 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/config.json ...
	I0920 19:25:57.618195  720494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:25:57.618252  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.634402  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.737760  720494 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:25:57.742480  720494 start.go:128] duration metric: took 9.790578414s to createHost
	I0920 19:25:57.742508  720494 start.go:83] releasing machines lock for "addons-244316", held for 9.790720023s
	I0920 19:25:57.742594  720494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-244316
	I0920 19:25:57.760148  720494 ssh_runner.go:195] Run: cat /version.json
	I0920 19:25:57.760203  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.760211  720494 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:25:57.760279  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.783475  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.784627  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.880299  720494 ssh_runner.go:195] Run: systemctl --version
	I0920 19:25:58.009150  720494 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:25:58.154311  720494 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:25:58.159113  720494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:25:58.179821  720494 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:25:58.179900  720494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:25:58.211641  720494 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 19:25:58.211670  720494 start.go:495] detecting cgroup driver to use...
	I0920 19:25:58.211707  720494 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:25:58.211764  720494 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:25:58.227213  720494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:25:58.239238  720494 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:25:58.239307  720494 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:25:58.254293  720494 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:25:58.268754  720494 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:25:58.352765  720494 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:25:58.452761  720494 docker.go:233] disabling docker service ...
	I0920 19:25:58.452850  720494 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:25:58.472668  720494 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:25:58.485779  720494 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:25:58.573268  720494 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:25:58.666873  720494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:25:58.679533  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:25:58.698607  720494 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:25:58.698720  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.709753  720494 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:25:58.709850  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.721514  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.732020  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.743803  720494 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:25:58.754937  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.765725  720494 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.784156  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.795101  720494 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:25:58.804507  720494 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:25:58.814571  720494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:25:58.906740  720494 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:25:59.036841  720494 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:25:59.037033  720494 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:25:59.041940  720494 start.go:563] Will wait 60s for crictl version
	I0920 19:25:59.042029  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:25:59.046343  720494 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:25:59.091228  720494 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 19:25:59.091392  720494 ssh_runner.go:195] Run: crio --version
	I0920 19:25:59.133146  720494 ssh_runner.go:195] Run: crio --version
	I0920 19:25:59.173996  720494 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 19:25:59.175094  720494 cli_runner.go:164] Run: docker network inspect addons-244316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:25:59.194435  720494 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 19:25:59.198004  720494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:25:59.208671  720494 kubeadm.go:883] updating cluster {Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:25:59.208837  720494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:59.208896  720494 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:25:59.284925  720494 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:25:59.284952  720494 crio.go:433] Images already preloaded, skipping extraction
	I0920 19:25:59.285011  720494 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:25:59.326795  720494 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:25:59.326826  720494 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:25:59.326836  720494 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 19:25:59.326938  720494 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-244316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
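The unit drop-in above is what later gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes, per the scp a few lines below). To confirm the merged unit on a live node, one option (profile name taken from this run) is:

    minikube -p addons-244316 ssh -- systemctl cat kubelet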
	I0920 19:25:59.327033  720494 ssh_runner.go:195] Run: crio config
	I0920 19:25:59.400041  720494 cni.go:84] Creating CNI manager for ""
	I0920 19:25:59.400067  720494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:25:59.400078  720494 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:25:59.400123  720494 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-244316 NodeName:addons-244316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:25:59.400318  720494 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-244316"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
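The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new below and promoted to kubeadm.yaml just before init. To inspect what kubeadm actually consumed on a running node, something like the following works (again assuming the addons-244316 profile):

    minikube -p addons-244316 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml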
	I0920 19:25:59.400413  720494 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:25:59.409466  720494 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:25:59.409543  720494 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:25:59.418255  720494 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 19:25:59.436798  720494 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:25:59.454812  720494 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0920 19:25:59.472784  720494 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 19:25:59.476021  720494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
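The /etc/hosts rewrites at 19:25:59.198 and here follow the same idempotent pattern: drop any existing line for the name, append a fresh tab-separated entry, then copy the result back into place with sudo. Generalized as a standalone sketch (the function name is hypothetical):

    # idempotent hosts-entry update, generalized from the commands above
    update_hosts_entry() {
      local ip="$1" name="$2"
      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
    update_hosts_entry 192.168.49.2 control-plane.minikube.internal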
	I0920 19:25:59.487107  720494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:25:59.575326  720494 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:25:59.590207  720494 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316 for IP: 192.168.49.2
	I0920 19:25:59.590230  720494 certs.go:194] generating shared ca certs ...
	I0920 19:25:59.590247  720494 certs.go:226] acquiring lock for ca certs: {Name:mk7d5a5d7b3ae5cfc59d92978e91627e15e3360b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:59.590385  720494 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key
	I0920 19:26:01.128707  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt ...
	I0920 19:26:01.128744  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt: {Name:mk1e04770eebce03242f88886403fc8aaa4cfe20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.129575  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key ...
	I0920 19:26:01.129604  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key: {Name:mka1be98ed1f78200fab01b6e2e3e6b22c64df46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.130163  720494 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key
	I0920 19:26:01.605890  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt ...
	I0920 19:26:01.605926  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt: {Name:mk03b39bb6b8251d65137612cf5e860b85386060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.606164  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key ...
	I0920 19:26:01.606193  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key: {Name:mk84b5b286008c7b39f1846c3a68b7450ec1aa33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.606319  720494 certs.go:256] generating profile certs ...
	I0920 19:26:01.606400  720494 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.key
	I0920 19:26:01.606424  720494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt with IP's: []
	I0920 19:26:02.051551  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt ...
	I0920 19:26:02.051591  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: {Name:mk4ce0de29683e22275174265e154c929722a947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.051776  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.key ...
	I0920 19:26:02.051790  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.key: {Name:mk93067dbaede2ab18fb6ecd46883d29e619fb22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.051868  720494 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239
	I0920 19:26:02.051891  720494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 19:26:02.516359  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239 ...
	I0920 19:26:02.516396  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239: {Name:mk04066709546d402e3fb86d226ae85095f6ecbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.516605  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239 ...
	I0920 19:26:02.516620  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239: {Name:mkf95597b5bfdb7c10c9fa46a41da8ae82c6dd73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.516735  720494 certs.go:381] copying /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239 -> /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt
	I0920 19:26:02.516829  720494 certs.go:385] copying /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239 -> /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key
	I0920 19:26:02.516886  720494 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key
	I0920 19:26:02.516908  720494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt with IP's: []
	I0920 19:26:02.897643  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt ...
	I0920 19:26:02.897677  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt: {Name:mk09dc4a7bfb678ac6c7e5b6b5d0beeda1b27aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.897877  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key ...
	I0920 19:26:02.897893  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key: {Name:mkdfdda2c3f5759ba75abfb95a8a24312a55704c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.898086  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 19:26:02.898132  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem (1082 bytes)
	I0920 19:26:02.898162  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:26:02.898190  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem (1675 bytes)
	I0920 19:26:02.898795  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:26:02.926518  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:26:02.955404  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:26:02.983641  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:26:03.014867  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 19:26:03.046742  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:26:03.076519  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:26:03.109906  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:26:03.141479  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:26:03.168462  720494 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:26:03.189520  720494 ssh_runner.go:195] Run: openssl version
	I0920 19:26:03.195282  720494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:26:03.206954  720494 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:26:03.211167  720494 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:26:03.211239  720494 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:26:03.218399  720494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
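The two steps above implement OpenSSL's hashed-directory convention by hand: the CA certificate is linked into /etc/ssl/certs under its subject hash (b5213941 for this CA) with a .0 suffix, which is how OpenSSL locates trust anchors. Equivalently:

    # derive the subject hash and create the c_rehash-style link by hand
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"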
	I0920 19:26:03.227694  720494 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:26:03.230917  720494 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 19:26:03.230978  720494 kubeadm.go:392] StartCluster: {Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:26:03.231066  720494 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:26:03.231129  720494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:26:03.270074  720494 cri.go:89] found id: ""
	I0920 19:26:03.270153  720494 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:26:03.280624  720494 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:26:03.291274  720494 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 19:26:03.291459  720494 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:26:03.302610  720494 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:26:03.302644  720494 kubeadm.go:157] found existing configuration files:
	
	I0920 19:26:03.302713  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:26:03.313478  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:26:03.313591  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:26:03.323025  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:26:03.332499  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:26:03.332592  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:26:03.341716  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:26:03.351516  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:26:03.351613  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:26:03.362846  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:26:03.376977  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:26:03.377091  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:26:03.387441  720494 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 19:26:03.434194  720494 kubeadm.go:310] W0920 19:26:03.433484    1189 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:26:03.436312  720494 kubeadm.go:310] W0920 19:26:03.435724    1189 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:26:03.478542  720494 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 19:26:03.547044  720494 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:26:21.145323  720494 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:26:21.145408  720494 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:26:21.145508  720494 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 19:26:21.145578  720494 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 19:26:21.145618  720494 kubeadm.go:310] OS: Linux
	I0920 19:26:21.145685  720494 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 19:26:21.145790  720494 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 19:26:21.145851  720494 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 19:26:21.145900  720494 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 19:26:21.145958  720494 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 19:26:21.146008  720494 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 19:26:21.146053  720494 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 19:26:21.146100  720494 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 19:26:21.146148  720494 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 19:26:21.146220  720494 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:26:21.146332  720494 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:26:21.146434  720494 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:26:21.146500  720494 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:26:21.148270  720494 out.go:235]   - Generating certificates and keys ...
	I0920 19:26:21.148377  720494 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:26:21.148446  720494 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:26:21.148515  720494 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 19:26:21.148586  720494 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 19:26:21.148656  720494 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 19:26:21.148741  720494 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 19:26:21.148806  720494 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 19:26:21.148925  720494 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-244316 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:26:21.148982  720494 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 19:26:21.149096  720494 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-244316 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:26:21.149163  720494 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 19:26:21.149230  720494 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 19:26:21.149278  720494 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 19:26:21.149337  720494 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:26:21.149392  720494 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:26:21.149453  720494 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:26:21.149507  720494 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:26:21.149572  720494 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:26:21.149629  720494 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:26:21.149710  720494 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:26:21.149782  720494 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:26:21.150997  720494 out.go:235]   - Booting up control plane ...
	I0920 19:26:21.151103  720494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:26:21.151182  720494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:26:21.151253  720494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:26:21.151362  720494 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:26:21.151450  720494 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:26:21.151493  720494 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:26:21.151625  720494 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:26:21.151731  720494 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:26:21.151792  720494 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.002195614s
	I0920 19:26:21.151866  720494 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:26:21.151928  720494 kubeadm.go:310] [api-check] The API server is healthy after 5.502091486s
	I0920 19:26:21.152036  720494 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:26:21.152163  720494 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:26:21.152225  720494 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:26:21.152406  720494 kubeadm.go:310] [mark-control-plane] Marking the node addons-244316 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:26:21.152465  720494 kubeadm.go:310] [bootstrap-token] Using token: z8az5e.wrm7la03ugzjp7n2
	I0920 19:26:21.154261  720494 out.go:235]   - Configuring RBAC rules ...
	I0920 19:26:21.154478  720494 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:26:21.154586  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:26:21.154732  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:26:21.154909  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:26:21.155048  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:26:21.155175  720494 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:26:21.155311  720494 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:26:21.155368  720494 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:26:21.155442  720494 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:26:21.155457  720494 kubeadm.go:310] 
	I0920 19:26:21.155528  720494 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:26:21.155538  720494 kubeadm.go:310] 
	I0920 19:26:21.155614  720494 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:26:21.155625  720494 kubeadm.go:310] 
	I0920 19:26:21.155651  720494 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:26:21.155712  720494 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:26:21.155767  720494 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:26:21.155774  720494 kubeadm.go:310] 
	I0920 19:26:21.155828  720494 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:26:21.155837  720494 kubeadm.go:310] 
	I0920 19:26:21.155891  720494 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:26:21.155899  720494 kubeadm.go:310] 
	I0920 19:26:21.155953  720494 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:26:21.156030  720494 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:26:21.156101  720494 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:26:21.156108  720494 kubeadm.go:310] 
	I0920 19:26:21.156190  720494 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:26:21.156274  720494 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:26:21.156280  720494 kubeadm.go:310] 
	I0920 19:26:21.156362  720494 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z8az5e.wrm7la03ugzjp7n2 \
	I0920 19:26:21.156468  720494 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9dcbae36a1cb65f9099573ad9fac7ebc036c2eab288a010b4e8645c68ec99bdd \
	I0920 19:26:21.156491  720494 kubeadm.go:310] 	--control-plane 
	I0920 19:26:21.156500  720494 kubeadm.go:310] 
	I0920 19:26:21.156585  720494 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:26:21.156594  720494 kubeadm.go:310] 
	I0920 19:26:21.156675  720494 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z8az5e.wrm7la03ugzjp7n2 \
	I0920 19:26:21.156882  720494 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9dcbae36a1cb65f9099573ad9fac7ebc036c2eab288a010b4e8645c68ec99bdd 
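The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. Assuming the default RSA CA key, it can be recomputed from the CA certificate, which minikube keeps under /var/lib/minikube/certs rather than the stock /etc/kubernetes/pki path:

    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'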
	I0920 19:26:21.156920  720494 cni.go:84] Creating CNI manager for ""
	I0920 19:26:21.156929  720494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:26:21.158769  720494 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 19:26:21.160058  720494 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 19:26:21.164256  720494 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 19:26:21.164293  720494 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 19:26:21.182687  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 19:26:21.476771  720494 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:26:21.476873  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:21.476920  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-244316 minikube.k8s.io/updated_at=2024_09_20T19_26_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-244316 minikube.k8s.io/primary=true
	I0920 19:26:21.502145  720494 ops.go:34] apiserver oom_adj: -16
	I0920 19:26:21.606164  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:22.106239  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:22.606963  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:23.106410  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:23.607082  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:24.106927  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:24.606481  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:24.731028  720494 kubeadm.go:1113] duration metric: took 3.254235742s to wait for elevateKubeSystemPrivileges
	I0920 19:26:24.731059  720494 kubeadm.go:394] duration metric: took 21.500084875s to StartCluster
	I0920 19:26:24.731077  720494 settings.go:142] acquiring lock: {Name:mk4ddd924228bcf0d3a34d801111d62307b61b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:24.731199  720494 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:26:24.731573  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/kubeconfig: {Name:mk7d8753aacb2df257bd5191c7b120c25eed71dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:24.732243  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 19:26:24.732578  720494 config.go:182] Loaded profile config "addons-244316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:26:24.732726  720494 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 19:26:24.732818  720494 addons.go:69] Setting yakd=true in profile "addons-244316"
	I0920 19:26:24.732834  720494 addons.go:234] Setting addon yakd=true in "addons-244316"
	I0920 19:26:24.732858  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.733357  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.733547  720494 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:26:24.733884  720494 addons.go:69] Setting cloud-spanner=true in profile "addons-244316"
	I0920 19:26:24.733908  720494 addons.go:234] Setting addon cloud-spanner=true in "addons-244316"
	I0920 19:26:24.733933  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.734414  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.734698  720494 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-244316"
	I0920 19:26:24.734731  720494 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-244316"
	I0920 19:26:24.734761  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.735207  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.737708  720494 addons.go:69] Setting registry=true in profile "addons-244316"
	I0920 19:26:24.738394  720494 addons.go:234] Setting addon registry=true in "addons-244316"
	I0920 19:26:24.738478  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.738988  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.743232  720494 addons.go:69] Setting storage-provisioner=true in profile "addons-244316"
	I0920 19:26:24.743320  720494 addons.go:234] Setting addon storage-provisioner=true in "addons-244316"
	I0920 19:26:24.743377  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.743891  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.748317  720494 addons.go:69] Setting default-storageclass=true in profile "addons-244316"
	I0920 19:26:24.748414  720494 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-244316"
	I0920 19:26:24.748919  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.756797  720494 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-244316"
	I0920 19:26:24.756882  720494 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-244316"
	I0920 19:26:24.757259  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.762988  720494 addons.go:69] Setting gcp-auth=true in profile "addons-244316"
	I0920 19:26:24.763039  720494 mustload.go:65] Loading cluster: addons-244316
	I0920 19:26:24.763255  720494 config.go:182] Loaded profile config "addons-244316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:26:24.763519  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.763925  720494 addons.go:69] Setting ingress=true in profile "addons-244316"
	I0920 19:26:24.763954  720494 addons.go:234] Setting addon ingress=true in "addons-244316"
	I0920 19:26:24.763999  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.764434  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.765037  720494 addons.go:69] Setting volcano=true in profile "addons-244316"
	I0920 19:26:24.765061  720494 addons.go:234] Setting addon volcano=true in "addons-244316"
	I0920 19:26:24.765092  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.765521  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.776878  720494 addons.go:69] Setting ingress-dns=true in profile "addons-244316"
	I0920 19:26:24.776920  720494 addons.go:234] Setting addon ingress-dns=true in "addons-244316"
	I0920 19:26:24.776986  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.777889  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.784771  720494 addons.go:69] Setting volumesnapshots=true in profile "addons-244316"
	I0920 19:26:24.784812  720494 addons.go:234] Setting addon volumesnapshots=true in "addons-244316"
	I0920 19:26:24.784851  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.785348  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.791077  720494 addons.go:69] Setting inspektor-gadget=true in profile "addons-244316"
	I0920 19:26:24.791114  720494 addons.go:234] Setting addon inspektor-gadget=true in "addons-244316"
	I0920 19:26:24.791157  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.791640  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.804082  720494 addons.go:69] Setting metrics-server=true in profile "addons-244316"
	I0920 19:26:24.804161  720494 out.go:177] * Verifying Kubernetes components...
	I0920 19:26:24.811690  720494 addons.go:234] Setting addon metrics-server=true in "addons-244316"
	I0920 19:26:24.811764  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.812275  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.738362  720494 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-244316"
	I0920 19:26:24.829194  720494 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-244316"
	I0920 19:26:24.829235  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.829723  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.850636  720494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:26:24.850683  720494 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 19:26:24.876752  720494 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 19:26:24.886705  720494 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 19:26:24.893510  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.895810  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 19:26:24.895828  720494 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 19:26:24.895890  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.904782  720494 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 19:26:24.906634  720494 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 19:26:24.911282  720494 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:26:24.911362  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 19:26:24.911513  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.914963  720494 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 19:26:24.915046  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 19:26:24.915159  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.920961  720494 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 19:26:24.921038  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 19:26:24.921135  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.958752  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 19:26:24.960237  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 19:26:24.960307  720494 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 19:26:24.964077  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.967158  720494 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 19:26:24.968275  720494 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-244316"
	I0920 19:26:24.968317  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.971597  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.988157  720494 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 19:26:24.988245  720494 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 19:26:24.988355  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.008361  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:26:25.008545  720494 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 19:26:25.021818  720494 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:26:25.027173  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 19:26:25.028992  720494 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:26:25.029020  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 19:26:25.029087  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.030454  720494 addons.go:234] Setting addon default-storageclass=true in "addons-244316"
	I0920 19:26:25.030503  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:25.030960  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:25.049224  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:26:25.057818  720494 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:26:25.057891  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:26:25.057977  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.069805  720494 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:26:25.069834  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 19:26:25.069903  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.073303  720494 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 19:26:25.074477  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 19:26:25.105627  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:26:25.105745  720494 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:26:25.105935  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.122314  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.123447  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
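The long sed pipeline above patches the CoreDNS ConfigMap so that pods can resolve host.minikube.internal: it injects a hosts plugin block ahead of the forward directive and enables query logging ahead of errors. The resulting Corefile fragment looks roughly like this (a sketch; the other default plugins are elided):

    .:53 {
        log
        errors
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }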
	I0920 19:26:25.124754  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0920 19:26:25.125412  720494 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 19:26:25.130076  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 19:26:25.133405  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 19:26:25.135850  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 19:26:25.142193  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 19:26:25.143570  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 19:26:25.145106  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 19:26:25.146269  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 19:26:25.146298  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 19:26:25.146395  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.191104  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.212528  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.248811  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.265661  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
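
Each sshutil.go:53 line records a new SSH client built from the same four fields: IP 127.0.0.1, the port discovered above, the machine's id_rsa key, and user "docker". What such a client amounts to, sketched with golang.org/x/crypto/ssh (a plausible reconstruction, not minikube's sshutil code):

    package main

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dial opens an SSH connection from the fields printed at sshutil.go:53.
    func dial(ip, port, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // The node container is local and ephemeral, so host-key checking
            // is skipped in this sketch; pin known hosts in real deployments.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", ip+":"+port, cfg)
    }

    func main() {
        client, err := dial("127.0.0.1", "32768",
            "/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa", "docker")
        if err != nil {
            panic(err)
        }
        defer client.Close()
    }
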
	I0920 19:26:25.284008  720494 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:26:25.284030  720494 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:26:25.284093  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.287036  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.299833  720494 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 19:26:25.305881  720494 out.go:177]   - Using image docker.io/busybox:stable
	I0920 19:26:25.312108  720494 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:26:25.312162  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 19:26:25.312243  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.316161  720494 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:26:25.348052  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.368019  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.368977  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.369628  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.379009  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.401631  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.411379  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.628116  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:26:25.691608  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 19:26:25.691689  720494 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 19:26:25.754633  720494 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 19:26:25.754725  720494 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 19:26:25.783343  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:26:25.807311  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:26:25.811289  720494 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 19:26:25.811364  720494 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 19:26:25.814757  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 19:26:25.814831  720494 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 19:26:25.822773  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:26:25.822846  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 19:26:25.847634  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:26:25.850739  720494 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 19:26:25.850817  720494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 19:26:25.871877  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:26:25.874400  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 19:26:25.904843  720494 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 19:26:25.904924  720494 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 19:26:25.931038  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:26:25.931145  720494 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:26:25.937014  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 19:26:25.937091  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 19:26:25.946447  720494 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:26:25.946510  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 19:26:25.952435  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:26:26.003047  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 19:26:26.003132  720494 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 19:26:26.055471  720494 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 19:26:26.055563  720494 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 19:26:26.058191  720494 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 19:26:26.058277  720494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 19:26:26.108523  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:26:26.108607  720494 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:26:26.120555  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 19:26:26.120639  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 19:26:26.143676  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:26:26.143751  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 19:26:26.159321  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:26:26.223265  720494 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 19:26:26.223347  720494 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 19:26:26.239958  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:26:26.290920  720494 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 19:26:26.291005  720494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 19:26:26.309158  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:26:26.335256  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 19:26:26.335338  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 19:26:26.351619  720494 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 19:26:26.351703  720494 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 19:26:26.442556  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 19:26:26.442634  720494 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 19:26:26.510624  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 19:26:26.510716  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 19:26:26.521274  720494 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 19:26:26.521400  720494 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 19:26:26.561917  720494 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:26:26.562042  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 19:26:26.611747  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 19:26:26.611825  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 19:26:26.612177  720494 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:26:26.612229  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 19:26:26.627234  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:26:26.678176  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 19:26:26.678252  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 19:26:26.694345  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:26:26.778377  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 19:26:26.778461  720494 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 19:26:26.948977  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 19:26:26.949051  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 19:26:27.077619  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 19:26:27.077706  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 19:26:27.165214  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:26:27.165330  720494 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 19:26:27.323372  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:26:28.781000  720494 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.657515938s)
	I0920 19:26:28.781029  720494 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
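
The 3.7s bash pipeline that just completed is the mechanism behind that start.go:971 line: it dumps the coredns ConfigMap as YAML, uses sed to insert a hosts stanza immediately before the "forward . /etc/resolv.conf" directive and a log directive before "errors", then pushes the result back with kubectl replace. Reconstructing from the sed program, the relevant part of the Corefile afterwards reads roughly:

        log
        errors
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

so in-cluster lookups of host.minikube.internal resolve to the Docker network gateway, 192.168.49.1, and everything else falls through to the node's resolver.
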
	I0920 19:26:28.782336  720494 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.466143903s)
	I0920 19:26:28.783478  720494 node_ready.go:35] waiting up to 6m0s for node "addons-244316" to be "Ready" ...
	I0920 19:26:28.800679  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.172462658s)
	I0920 19:26:29.525301  720494 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-244316" context rescaled to 1 replicas
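
The kapi.go:214 rescale trims coredns from its default two replicas down to one, which is all a single-node cluster needs. A minimal client-go sketch of such a rescale (illustrative; minikube's kapi helper may differ):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // Read the current scale subresource, then write it back with one replica.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
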
	I0920 19:26:30.797646  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:31.267609  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.460211174s)
	I0920 19:26:31.267696  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.419987423s)
	I0920 19:26:31.267729  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.395787758s)
	I0920 19:26:31.267762  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.393293006s)
	I0920 19:26:31.267799  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.315298297s)
	I0920 19:26:31.267824  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.108422582s)
	I0920 19:26:31.268250  720494 addons.go:475] Verifying addon registry=true in "addons-244316"
	I0920 19:26:31.268429  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.484989303s)
	I0920 19:26:31.268458  720494 addons.go:475] Verifying addon ingress=true in "addons-244316"
	I0920 19:26:31.267881  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.02785039s)
	I0920 19:26:31.268830  720494 addons.go:475] Verifying addon metrics-server=true in "addons-244316"
	I0920 19:26:31.267910  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.958677932s)
	I0920 19:26:31.271284  720494 out.go:177] * Verifying registry addon...
	I0920 19:26:31.271359  720494 out.go:177] * Verifying ingress addon...
	I0920 19:26:31.272983  720494 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-244316 service yakd-dashboard -n yakd-dashboard
	
	I0920 19:26:31.275860  720494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 19:26:31.277001  720494 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 19:26:31.316121  720494 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 19:26:31.316161  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:31.317362  720494 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 19:26:31.317387  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
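
From here the log settles into polling: kapi.go:96 re-checks each addon's pods by label selector about once a second until they leave Pending, and node_ready.go:53 does the same for the node's Ready condition. The shape of that wait loop, sketched with client-go (an assumed reconstruction, not kapi.go's actual code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls until every pod matching selector in ns is Running,
    // mirroring the "waiting for pod ..." lines that follow.
    func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    ready = false
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                }
            }
            if ready {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("timed out waiting for %q", selector)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
    }
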
	W0920 19:26:31.351356  720494 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
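
That storage-provisioner-rancher warning is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between minikube's read and its write, so the API server rejected the stale resourceVersion. The standard client-go remedy is to re-read and re-apply the change on conflict, for example (a generic sketch of the pattern, not the addon's code; the annotation is the standard default-class marker):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // Re-read on every attempt so the update carries a fresh resourceVersion;
        // RetryOnConflict retries only when the server reports a conflict.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            panic(err)
        }
    }
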
	I0920 19:26:31.428421  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.80109277s)
	W0920 19:26:31.428552  720494 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 19:26:31.428610  720494 retry.go:31] will retry after 262.995193ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
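
This failure is a CRD ordering race, not a broken manifest: the same kubectl apply both creates the VolumeSnapshotClass CRD and a VolumeSnapshotClass object, and the API server has not registered the new kind by the time the object arrives, hence "ensure CRDs are installed first". retry.go:31 schedules a second attempt about 263ms later, and the re-run below (19:26:31.692) adds --force and succeeds once the CRDs are established. The retry pattern itself, in a minimal sketch (illustrative, not minikube's retry package):

    package main

    import (
        "fmt"
        "time"
    )

    // retryWithBackoff re-runs apply until it succeeds, sleeping a growing
    // interval between attempts, as the retry.go:31 line above does once.
    func retryWithBackoff(apply func() error, attempts int, initial time.Duration) error {
        backoff := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", backoff, err)
            time.Sleep(backoff)
            backoff *= 2
        }
        return err
    }

    func main() {
        // The real caller shells out to kubectl apply; stub it here so the
        // sketch is self-contained.
        calls := 0
        err := retryWithBackoff(func() error {
            calls++
            if calls < 2 {
                return fmt.Errorf("no matches for kind \"VolumeSnapshotClass\"")
            }
            return nil
        }, 5, 250*time.Millisecond)
        if err != nil {
            panic(err)
        }
    }
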
	I0920 19:26:31.428729  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.734302903s)
	I0920 19:26:31.692832  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:26:31.785593  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:31.787168  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:31.807816  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.48433629s)
	I0920 19:26:31.807856  720494 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-244316"
	I0920 19:26:31.812633  720494 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 19:26:31.816455  720494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 19:26:31.827502  720494 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 19:26:31.827532  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:32.319083  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:32.338243  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:32.343594  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:32.780997  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:32.782488  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:32.821766  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:33.292797  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:33.293772  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:33.294669  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:33.320656  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:33.550663  720494 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 19:26:33.550800  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:33.572839  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:33.737603  720494 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 19:26:33.782544  720494 addons.go:234] Setting addon gcp-auth=true in "addons-244316"
	I0920 19:26:33.782601  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:33.783145  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:33.786655  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:33.788546  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:33.798699  720494 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 19:26:33.798751  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:33.821497  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:33.823451  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:34.284102  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:34.289618  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:34.323390  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:34.779640  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:34.781007  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:34.820473  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:34.992350  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.299433758s)
	I0920 19:26:34.992431  720494 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.193714155s)
	I0920 19:26:34.995444  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:26:34.997869  720494 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 19:26:35.001709  720494 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 19:26:35.001756  720494 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 19:26:35.035728  720494 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 19:26:35.035759  720494 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 19:26:35.079944  720494 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:26:35.079979  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 19:26:35.102984  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:26:35.294596  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:35.295376  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:35.296628  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:35.323531  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:35.758392  720494 addons.go:475] Verifying addon gcp-auth=true in "addons-244316"
	I0920 19:26:35.761764  720494 out.go:177] * Verifying gcp-auth addon...
	I0920 19:26:35.765377  720494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 19:26:35.775895  720494 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 19:26:35.775929  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:35.783065  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:35.788830  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:35.820810  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:36.269858  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:36.279954  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:36.283043  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:36.320621  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:36.768866  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:36.779447  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:36.781676  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:36.820993  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:37.269034  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:37.282448  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:37.285567  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:37.321631  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:37.773878  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:37.779965  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:37.784905  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:37.788061  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:37.822000  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:38.269379  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:38.281874  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:38.282763  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:38.320741  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:38.769226  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:38.780873  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:38.782249  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:38.821403  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:39.269763  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:39.282689  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:39.283733  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:39.319865  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:39.770281  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:39.780666  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:39.781505  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:39.819986  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:40.269726  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:40.284516  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:40.288013  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:40.289324  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:40.321134  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:40.768854  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:40.781445  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:40.782576  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:40.820776  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:41.270401  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:41.282485  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:41.286141  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:41.320497  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:41.769918  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:41.781588  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:41.781944  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:41.820204  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:42.269713  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:42.283956  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:42.285628  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:42.289754  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:42.324052  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:42.768905  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:42.779837  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:42.781697  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:42.820416  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:43.269483  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:43.282444  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:43.289492  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:43.320876  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:43.769566  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:43.780213  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:43.781387  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:43.820582  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:44.268685  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:44.283172  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:44.284818  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:44.319863  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:44.768823  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:44.779384  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:44.780902  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:44.787478  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:44.820117  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:45.271913  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:45.292436  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:45.292763  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:45.321040  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:45.768511  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:45.780265  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:45.781678  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:45.819905  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:46.268442  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:46.281477  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:46.283481  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:46.321173  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:46.769096  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:46.780569  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:46.781603  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:46.820668  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:47.269903  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:47.283811  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:47.285562  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:47.288580  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:47.321196  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:47.769692  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:47.780998  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:47.781170  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:47.820425  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:48.269042  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:48.280340  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:48.282709  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:48.320815  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:48.775148  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:48.780544  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:48.780639  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:48.819936  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:49.268525  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:49.287431  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:49.289191  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:49.290872  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:49.319847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:49.769384  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:49.779298  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:49.781188  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:49.820383  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:50.269301  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:50.282411  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:50.285240  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:50.321060  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:50.769473  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:50.779443  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:50.781486  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:50.820597  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:51.270112  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:51.282639  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:51.283088  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:51.320080  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:51.770106  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:51.780583  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:51.782027  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:51.787638  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:51.821886  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:52.268519  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:52.283176  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:52.284064  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:52.320872  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:52.769214  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:52.780683  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:52.781634  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:52.820510  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:53.268723  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:53.282200  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:53.283249  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:53.319884  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:53.769787  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:53.779902  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:53.781216  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:53.787956  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:53.820259  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:54.268727  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:54.284290  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:54.286675  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:54.320499  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:54.770197  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:54.780336  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:54.780886  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:54.872159  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:55.269901  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:55.283331  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:55.284928  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:55.322123  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:55.769377  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:55.786795  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:55.788318  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:55.792405  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:55.820273  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:56.269247  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:56.282020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:56.282671  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:56.320941  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:56.768548  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:56.779663  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:56.781168  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:56.823458  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:57.270683  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:57.280849  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:57.289341  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:57.320313  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:57.770057  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:57.781886  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:57.782805  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:57.820734  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:58.269602  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:58.287535  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:58.289519  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:58.290728  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:58.320213  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:58.775347  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:58.779009  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:58.780054  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:58.820774  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:59.270294  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:59.286626  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:59.286677  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:59.320640  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:59.769280  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:59.778971  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:59.782087  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:59.820716  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:00.309244  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:00.318305  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:00.319591  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:00.339041  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:00.343407  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:00.769300  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:00.780065  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:00.781157  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:00.820504  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:01.269978  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:01.280960  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:01.281807  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:01.320569  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:01.770020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:01.779716  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:01.780878  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:01.820495  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:02.268783  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:02.288170  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:02.289424  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:02.320500  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:02.769169  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:02.779328  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:02.780818  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:02.787295  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:02.820714  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:03.269333  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:03.282502  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:03.283193  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:03.320347  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:03.768910  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:03.779162  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:03.786884  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:03.820888  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:04.268561  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:04.282839  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:04.286144  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:04.319847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:04.769755  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:04.779324  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:04.781769  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:04.787978  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:04.819936  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:05.269186  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:05.279197  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:05.282877  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:05.320475  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:05.768569  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:05.780966  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:05.781692  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:05.820438  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:06.268480  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:06.281239  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:06.282090  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:06.322308  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:06.769661  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:06.779803  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:06.781464  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:06.819852  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:07.268837  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:07.281876  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:07.284988  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:07.286363  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:07.320953  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:07.769044  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:07.779879  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:07.781635  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:07.820446  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:08.269819  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:08.279826  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:08.282143  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:08.320863  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:08.769566  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:08.780628  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:08.781408  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:08.820608  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:09.269486  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:09.282792  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:09.284787  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:09.288825  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:09.320872  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:09.771436  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:09.871052  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:09.871872  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:09.872778  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:10.268788  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:10.281335  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:10.282038  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:10.320121  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:10.790338  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:10.798021  720494 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 19:27:10.798094  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:10.802689  720494 node_ready.go:49] node "addons-244316" has status "Ready":"True"
	I0920 19:27:10.802757  720494 node_ready.go:38] duration metric: took 42.019246373s for node "addons-244316" to be "Ready" ...
	I0920 19:27:10.802790  720494 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:27:10.812816  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:10.826520  720494 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 19:27:10.826550  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:10.835433  720494 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-22l55" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.293900  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:11.305989  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:11.307104  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:11.332577  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:11.783357  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:11.784517  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:11.784931  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:11.849334  720494 pod_ready.go:93] pod "coredns-7c65d6cfc9-22l55" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.849418  720494 pod_ready.go:82] duration metric: took 1.013937392s for pod "coredns-7c65d6cfc9-22l55" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.849456  720494 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.868495  720494 pod_ready.go:93] pod "etcd-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.868569  720494 pod_ready.go:82] duration metric: took 19.076003ms for pod "etcd-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.868600  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.875423  720494 pod_ready.go:93] pod "kube-apiserver-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.875560  720494 pod_ready.go:82] duration metric: took 6.929545ms for pod "kube-apiserver-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.875595  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.879213  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:11.884216  720494 pod_ready.go:93] pod "kube-controller-manager-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.884288  720494 pod_ready.go:82] duration metric: took 8.628615ms for pod "kube-controller-manager-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.884318  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2cdvm" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.988094  720494 pod_ready.go:93] pod "kube-proxy-2cdvm" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.988130  720494 pod_ready.go:82] duration metric: took 103.789214ms for pod "kube-proxy-2cdvm" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.988147  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:12.269264  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:12.287208  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:12.289033  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:12.322571  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:12.388606  720494 pod_ready.go:93] pod "kube-scheduler-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:12.388638  720494 pod_ready.go:82] duration metric: took 400.478914ms for pod "kube-scheduler-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:12.388653  720494 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:12.770087  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:12.781393  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:12.785622  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:12.822319  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:13.269603  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:13.296337  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:13.296766  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:13.322590  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:13.769847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:13.779693  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:13.782433  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:13.822091  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:14.269252  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:14.280182  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:14.284723  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:14.322263  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:14.398349  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:14.770832  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:14.783054  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:14.784559  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:14.822909  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:15.270387  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:15.285696  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:15.290910  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:15.326172  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:15.770026  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:15.783567  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:15.785272  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:15.824485  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:16.270241  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:16.284794  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:16.285654  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:16.323741  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:16.770988  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:16.786341  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:16.788159  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:16.824214  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:16.898178  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:17.268906  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:17.285452  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:17.297778  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:17.323090  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:17.770096  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:17.783132  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:17.791255  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:17.822351  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:18.269424  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:18.280994  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:18.282707  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:18.321372  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:18.769666  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:18.781587  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:18.784235  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:18.822682  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:19.269470  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:19.283677  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:19.288629  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:19.321699  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:19.396588  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:19.772980  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:19.780803  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:19.782719  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:19.875556  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:20.269492  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:20.291670  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:20.292866  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:20.337167  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:20.773159  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:20.784988  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:20.787988  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:20.872046  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:21.269963  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:21.282783  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:21.286803  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:21.322202  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:21.405583  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:21.783199  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:21.783670  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:21.784876  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:21.821833  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:22.269088  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:22.284065  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:22.285339  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:22.321055  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:22.770100  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:22.781884  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:22.782968  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:22.823045  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:23.270418  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:23.303569  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:23.309281  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:23.339078  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:23.769506  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:23.783194  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:23.785917  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:23.822439  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:23.897984  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:24.269340  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:24.291455  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:24.292763  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:24.323361  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:24.769968  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:24.782088  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:24.783038  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:24.822751  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:25.272308  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:25.283430  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:25.283627  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:25.374953  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:25.769005  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:25.780915  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:25.781787  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:25.823930  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:25.903531  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:26.269834  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:26.282530  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:26.283167  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:26.322054  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:26.772379  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:26.782423  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:26.783631  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:26.825085  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:27.269779  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:27.284418  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:27.284806  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:27.338146  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:27.769342  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:27.780314  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:27.781854  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:27.821476  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:28.269286  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:28.286594  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:28.287661  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:28.321326  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:28.396550  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:28.769726  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:28.786411  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:28.789226  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:28.823669  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:29.273075  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:29.294479  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:29.294798  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:29.321458  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:29.783611  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:29.801323  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:29.802103  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:29.822898  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:30.270237  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:30.281437  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:30.288878  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:30.323004  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:30.397272  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:30.770019  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:30.781553  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:30.784063  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:30.821814  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:31.274829  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:31.376343  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:31.376669  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:31.377851  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:31.770717  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:31.872819  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:31.874407  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:31.874951  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:32.269835  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:32.289378  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:32.296761  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:32.336615  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:32.770473  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:32.780243  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:32.783111  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:32.822024  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:32.895154  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:33.269100  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:33.285151  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:33.286306  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:33.321947  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:33.769592  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:33.785588  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:33.787506  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:33.823169  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:34.270017  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:34.296394  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:34.298155  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:34.323308  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:34.771050  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:34.779942  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:34.783226  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:34.823169  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:34.896141  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:35.271216  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:35.287615  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:35.287751  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:35.321772  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:35.769122  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:35.779720  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:35.783006  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:35.825488  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:36.271960  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:36.283707  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:36.285920  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:36.323409  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:36.769923  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:36.783227  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:36.784870  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:36.823594  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:36.898558  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:37.269755  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:37.293610  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:37.295883  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:37.324183  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:37.770650  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:37.787794  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:37.790021  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:37.825192  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:38.271469  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:38.286644  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:38.295621  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:38.372751  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:38.770626  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:38.783653  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:38.785086  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:38.828413  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:38.899046  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:39.269283  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:39.289657  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:39.290851  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:39.322100  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:39.769808  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:39.780111  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:39.782297  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:39.822102  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:40.269888  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:40.283081  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:40.289576  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:40.321540  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:40.771932  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:40.786292  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:40.787574  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:40.822514  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:40.902622  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:41.293096  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:41.293655  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:41.295135  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:41.383616  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:41.769623  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:41.780568  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:41.782685  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:41.821358  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:42.270092  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:42.283534  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:42.285074  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:42.323092  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:42.769472  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:42.783473  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:42.784385  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:42.821487  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:42.910866  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:43.269586  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:43.283137  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:43.284561  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:43.322062  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:43.770396  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:43.783706  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:43.785318  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:43.874138  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:44.270492  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:44.288382  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:44.289311  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:44.323291  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:44.772398  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:44.784708  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:44.789269  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:44.828934  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:45.270427  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:45.293926  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:45.297006  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:45.330624  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:45.395524  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:45.770375  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:45.780214  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:45.782897  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:45.821691  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:46.269691  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:46.287920  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:46.290547  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:46.321363  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:46.769449  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:46.780741  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:46.781790  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:46.821438  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:47.268967  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:47.283856  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:47.288484  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:47.321138  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:47.771747  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:47.782286  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:47.782867  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:47.821598  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:47.901855  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:48.270415  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:48.283534  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:48.293559  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:48.321254  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:48.769759  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:48.783475  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:48.784034  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:48.821245  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:49.269519  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:49.296283  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:49.297159  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:49.321855  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:49.769549  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:49.786083  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:49.787661  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:49.836279  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:49.905574  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:50.269880  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:50.284332  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:50.285274  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:50.326269  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:50.770583  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:50.784496  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:50.786832  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:50.821866  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:51.272774  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:51.288246  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:51.300589  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:51.328745  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:51.769682  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:51.784224  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:51.786399  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:51.822610  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:52.270010  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:52.284491  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:52.296634  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:52.321591  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:52.395168  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:52.769363  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:52.803871  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:52.804636  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:52.851054  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:53.269200  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:53.291143  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:53.292306  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:53.320768  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:53.769623  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:53.780255  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:53.781495  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:53.821099  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:54.270051  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:54.280279  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:54.286682  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:54.321233  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:54.397046  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:54.769210  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:54.781271  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:54.781800  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:54.821639  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:55.269499  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:55.283112  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:55.288430  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:55.321614  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:55.770291  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:55.780427  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:55.783414  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:55.821748  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:56.269112  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:56.297598  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:56.299043  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:56.322662  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:56.769391  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:56.782271  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:56.785981  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:56.822587  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:56.895669  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:57.269104  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:57.283331  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:57.285039  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:57.324318  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:57.770711  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:57.785695  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:57.786564  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:57.821295  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:58.270847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:58.297070  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:58.299954  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:58.324589  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:58.770818  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:58.783118  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:58.784755  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:58.824340  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:58.898454  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:59.269608  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:59.288962  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:59.289893  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:59.327209  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:59.771301  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:59.779197  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:59.782085  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:59.821907  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:00.314086  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:28:00.315524  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:00.315879  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:00.377221  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:00.769699  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:00.782012  720494 kapi.go:107] duration metric: took 1m29.50615052s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 19:28:00.785641  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:00.821608  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:00.904903  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:01.273520  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:01.285242  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:01.322849  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:01.769313  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:01.783195  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:01.822914  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:02.274020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:02.298141  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:02.326441  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:02.780076  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:02.785058  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:02.822665  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:03.268604  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:03.283597  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:03.321554  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:03.395718  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:03.768851  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:03.781855  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:03.823928  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:04.272216  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:04.283815  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:04.321512  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:04.769441  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:04.781911  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:04.821714  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:05.273285  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:05.286463  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:05.321734  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:05.400983  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:05.768739  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:05.782803  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:05.822520  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:06.271932  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:06.284198  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:06.322312  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:06.769439  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:06.781469  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:06.821798  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:07.269345  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:07.282286  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:07.321981  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:07.768935  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:07.782910  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:07.822376  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:07.899845  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:08.270993  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:08.281747  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:08.373020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:08.769168  720494 kapi.go:107] duration metric: took 1m33.003789569s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 19:28:08.771038  720494 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-244316 cluster.
	I0920 19:28:08.772384  720494 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 19:28:08.773719  720494 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
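	(The three gcp-auth notes above describe an opt-out label and a refresh path. A minimal sketch of both, assuming a hypothetical pod name "skip-demo" and assuming the addon only checks for the presence of the gcp-auth-skip-secret label key on the pod at creation time, which the log hints at but does not spell out:

	# Create a pod that opts out of credential mounting; the label is set in the pod
	# configuration at creation, as the note above suggests. Pod name is illustrative.
	kubectl --context addons-244316 run skip-demo --image=gcr.io/k8s-minikube/busybox --labels=gcp-auth-skip-secret=true -- sleep 3600

	# Remount credentials into pods created before gcp-auth finished, per the --refresh hint above.
	out/minikube-linux-arm64 -p addons-244316 addons enable gcp-auth --refresh
	)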
	I0920 19:28:08.781583  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:08.821332  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:09.282252  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:09.322029  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:09.783523  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:09.822921  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:09.902756  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:10.296308  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:10.322822  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:10.781762  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:10.822736  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:11.297609  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:11.321398  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:11.788233  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:11.824483  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:12.282873  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:12.322536  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:12.397997  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:12.782445  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:12.821032  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:13.288861  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:13.329878  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:13.781557  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:13.821498  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:14.290567  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:14.397119  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:14.401354  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:14.782289  720494 kapi.go:107] duration metric: took 1m43.505284147s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 19:28:14.821777  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:15.321877  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:15.834652  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:16.322429  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:16.822634  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:16.895178  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:17.323711  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:17.821392  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:18.326695  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:18.826947  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:18.895775  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:19.322832  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:19.825859  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:20.326263  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:20.825646  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:20.902662  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:21.322196  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:21.822435  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:22.322167  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:22.824989  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:23.322738  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:23.399445  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:23.822550  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:24.322519  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:24.824176  720494 kapi.go:107] duration metric: took 1m53.007723649s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 19:28:24.825669  720494 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0920 19:28:24.826761  720494 addons.go:510] duration metric: took 2m0.094026687s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0920 19:28:25.896052  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:27.896324  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:30.395200  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:32.895750  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:34.896053  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:37.396563  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:39.396837  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:41.895058  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:43.896042  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:45.907685  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:48.395599  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:50.895101  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:51.396083  720494 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"True"
	I0920 19:28:51.396121  720494 pod_ready.go:82] duration metric: took 1m39.007452648s for pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace to be "Ready" ...
	I0920 19:28:51.396140  720494 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-n79hn" in "kube-system" namespace to be "Ready" ...
	I0920 19:28:51.402155  720494 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-n79hn" in "kube-system" namespace has status "Ready":"True"
	I0920 19:28:51.402182  720494 pod_ready.go:82] duration metric: took 6.032492ms for pod "nvidia-device-plugin-daemonset-n79hn" in "kube-system" namespace to be "Ready" ...
	I0920 19:28:51.402206  720494 pod_ready.go:39] duration metric: took 1m40.599394134s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:28:51.402223  720494 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:28:51.402271  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:28:51.402336  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:28:51.456299  720494 cri.go:89] found id: "7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:28:51.456320  720494 cri.go:89] found id: ""
	I0920 19:28:51.456328  720494 logs.go:276] 1 containers: [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb]
	I0920 19:28:51.456393  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.460648  720494 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:28:51.460789  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:28:51.505091  720494 cri.go:89] found id: "a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:28:51.505116  720494 cri.go:89] found id: ""
	I0920 19:28:51.505128  720494 logs.go:276] 1 containers: [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56]
	I0920 19:28:51.505189  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.509129  720494 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:28:51.509207  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:28:51.562231  720494 cri.go:89] found id: "057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:28:51.562252  720494 cri.go:89] found id: ""
	I0920 19:28:51.562260  720494 logs.go:276] 1 containers: [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78]
	I0920 19:28:51.562319  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.566016  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:28:51.566137  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:28:51.603264  720494 cri.go:89] found id: "4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:28:51.603287  720494 cri.go:89] found id: ""
	I0920 19:28:51.603295  720494 logs.go:276] 1 containers: [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232]
	I0920 19:28:51.603353  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.606913  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:28:51.606987  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:28:51.652913  720494 cri.go:89] found id: "f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:28:51.652935  720494 cri.go:89] found id: ""
	I0920 19:28:51.652943  720494 logs.go:276] 1 containers: [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba]
	I0920 19:28:51.653002  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.656955  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:28:51.657040  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:28:51.704412  720494 cri.go:89] found id: "be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:28:51.704438  720494 cri.go:89] found id: ""
	I0920 19:28:51.704447  720494 logs.go:276] 1 containers: [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8]
	I0920 19:28:51.704534  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.708634  720494 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:28:51.708744  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:28:51.752746  720494 cri.go:89] found id: "4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:28:51.752776  720494 cri.go:89] found id: ""
	I0920 19:28:51.752785  720494 logs.go:276] 1 containers: [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1]
	I0920 19:28:51.752879  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.758970  720494 logs.go:123] Gathering logs for kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] ...
	I0920 19:28:51.759003  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:28:51.819975  720494 logs.go:123] Gathering logs for etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] ...
	I0920 19:28:51.820014  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:28:51.876012  720494 logs.go:123] Gathering logs for kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] ...
	I0920 19:28:51.876043  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:28:51.921789  720494 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:28:51.921823  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:28:52.030887  720494 logs.go:123] Gathering logs for kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] ...
	I0920 19:28:52.030941  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:28:52.115160  720494 logs.go:123] Gathering logs for kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] ...
	I0920 19:28:52.115293  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:28:52.178170  720494 logs.go:123] Gathering logs for container status ...
	I0920 19:28:52.178239  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:28:52.241811  720494 logs.go:123] Gathering logs for kubelet ...
	I0920 19:28:52.241847  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:28:52.266767  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.267022  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:28:52.267252  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.267486  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:28:52.327738  720494 logs.go:123] Gathering logs for dmesg ...
	I0920 19:28:52.327779  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:28:52.346639  720494 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:28:52.346670  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:28:52.535292  720494 logs.go:123] Gathering logs for coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] ...
	I0920 19:28:52.535323  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:28:52.598442  720494 logs.go:123] Gathering logs for kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] ...
	I0920 19:28:52.598473  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:28:52.654339  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:28:52.654372  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:28:52.654455  720494 out.go:270] X Problems detected in kubelet:
	W0920 19:28:52.654469  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.654489  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:28:52.654500  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.654507  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:28:52.654512  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:28:52.654519  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
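	(The four "Found kubelet problem" entries echoed above are node-authorizer denials: a kubelet is only allowed to read configmaps referenced by pods already bound to its node, so "no relationship found between node 'addons-244316' and this object" typically appears while the local-path-storage pod is still being scheduled and clears on its own. A sketch of one way to confirm the objects themselves exist once that pod is up; both configmap names are taken from the messages above:

	# kube-root-ca.crt is auto-published in every namespace; local-path-config belongs to the provisioner.
	kubectl --context addons-244316 -n local-path-storage get configmap kube-root-ca.crt local-path-config
	)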
	I0920 19:29:02.655826  720494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:29:02.669891  720494 api_server.go:72] duration metric: took 2m37.936293093s to wait for apiserver process to appear ...
	I0920 19:29:02.669918  720494 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:29:02.669953  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:29:02.670013  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:29:02.709792  720494 cri.go:89] found id: "7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:02.709821  720494 cri.go:89] found id: ""
	I0920 19:29:02.709830  720494 logs.go:276] 1 containers: [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb]
	I0920 19:29:02.709905  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.713936  720494 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:29:02.714022  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:29:02.758325  720494 cri.go:89] found id: "a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:02.758351  720494 cri.go:89] found id: ""
	I0920 19:29:02.758360  720494 logs.go:276] 1 containers: [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56]
	I0920 19:29:02.758421  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.762432  720494 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:29:02.762517  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:29:02.816194  720494 cri.go:89] found id: "057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:02.816229  720494 cri.go:89] found id: ""
	I0920 19:29:02.816254  720494 logs.go:276] 1 containers: [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78]
	I0920 19:29:02.816358  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.820412  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:29:02.820495  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:29:02.868008  720494 cri.go:89] found id: "4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:02.868057  720494 cri.go:89] found id: ""
	I0920 19:29:02.868066  720494 logs.go:276] 1 containers: [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232]
	I0920 19:29:02.868176  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.872662  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:29:02.872784  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:29:02.922423  720494 cri.go:89] found id: "f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:02.922448  720494 cri.go:89] found id: ""
	I0920 19:29:02.922457  720494 logs.go:276] 1 containers: [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba]
	I0920 19:29:02.922570  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.926673  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:29:02.926808  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:29:02.974679  720494 cri.go:89] found id: "be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:02.974703  720494 cri.go:89] found id: ""
	I0920 19:29:02.974712  720494 logs.go:276] 1 containers: [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8]
	I0920 19:29:02.974773  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.978454  720494 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:29:02.978565  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:29:03.024328  720494 cri.go:89] found id: "4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:03.024410  720494 cri.go:89] found id: ""
	I0920 19:29:03.024433  720494 logs.go:276] 1 containers: [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1]
	I0920 19:29:03.024509  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:03.028984  720494 logs.go:123] Gathering logs for kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] ...
	I0920 19:29:03.029059  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:03.078751  720494 logs.go:123] Gathering logs for kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] ...
	I0920 19:29:03.078784  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:03.123529  720494 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:29:03.123565  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:29:03.267729  720494 logs.go:123] Gathering logs for kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] ...
	I0920 19:29:03.267765  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:03.319964  720494 logs.go:123] Gathering logs for kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] ...
	I0920 19:29:03.319999  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:03.377209  720494 logs.go:123] Gathering logs for etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] ...
	I0920 19:29:03.377254  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:03.430429  720494 logs.go:123] Gathering logs for coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] ...
	I0920 19:29:03.430466  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:03.479287  720494 logs.go:123] Gathering logs for kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] ...
	I0920 19:29:03.479326  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:03.561312  720494 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:29:03.561350  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:29:03.668739  720494 logs.go:123] Gathering logs for container status ...
	I0920 19:29:03.668801  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:29:03.732250  720494 logs.go:123] Gathering logs for kubelet ...
	I0920 19:29:03.732283  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:29:03.763347  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.763596  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:03.763788  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.764019  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:03.824458  720494 logs.go:123] Gathering logs for dmesg ...
	I0920 19:29:03.824495  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:29:03.842781  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:03.842807  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:29:03.842859  720494 out.go:270] X Problems detected in kubelet:
	W0920 19:29:03.842874  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.842882  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:03.842891  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.842901  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:03.842906  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:03.842912  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:29:13.844440  720494 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:29:13.852275  720494 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 19:29:13.853640  720494 api_server.go:141] control plane version: v1.31.1
	I0920 19:29:13.853669  720494 api_server.go:131] duration metric: took 11.183744147s to wait for apiserver health ...
	I0920 19:29:13.853678  720494 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:29:13.853701  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:29:13.853773  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:29:13.894321  720494 cri.go:89] found id: "7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:13.894346  720494 cri.go:89] found id: ""
	I0920 19:29:13.894354  720494 logs.go:276] 1 containers: [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb]
	I0920 19:29:13.894418  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:13.898250  720494 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:29:13.898360  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:29:13.941458  720494 cri.go:89] found id: "a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:13.941492  720494 cri.go:89] found id: ""
	I0920 19:29:13.941500  720494 logs.go:276] 1 containers: [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56]
	I0920 19:29:13.941573  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:13.945504  720494 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:29:13.945587  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:29:13.986871  720494 cri.go:89] found id: "057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:13.986894  720494 cri.go:89] found id: ""
	I0920 19:29:13.986902  720494 logs.go:276] 1 containers: [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78]
	I0920 19:29:13.986962  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:13.990974  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:29:13.991061  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:29:14.034050  720494 cri.go:89] found id: "4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:14.034071  720494 cri.go:89] found id: ""
	I0920 19:29:14.034078  720494 logs.go:276] 1 containers: [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232]
	I0920 19:29:14.034141  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.038040  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:29:14.038128  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:29:14.081852  720494 cri.go:89] found id: "f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:14.081874  720494 cri.go:89] found id: ""
	I0920 19:29:14.081883  720494 logs.go:276] 1 containers: [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba]
	I0920 19:29:14.081944  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.085846  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:29:14.085928  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:29:14.133064  720494 cri.go:89] found id: "be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:14.133089  720494 cri.go:89] found id: ""
	I0920 19:29:14.133098  720494 logs.go:276] 1 containers: [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8]
	I0920 19:29:14.133162  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.136964  720494 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:29:14.137069  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:29:14.177123  720494 cri.go:89] found id: "4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:14.177146  720494 cri.go:89] found id: ""
	I0920 19:29:14.177155  720494 logs.go:276] 1 containers: [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1]
	I0920 19:29:14.177213  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.180998  720494 logs.go:123] Gathering logs for kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] ...
	I0920 19:29:14.181035  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:14.260229  720494 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:29:14.260265  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:29:14.378494  720494 logs.go:123] Gathering logs for etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] ...
	I0920 19:29:14.378538  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:14.437059  720494 logs.go:123] Gathering logs for coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] ...
	I0920 19:29:14.437092  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:14.489260  720494 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:29:14.489292  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:29:14.630069  720494 logs.go:123] Gathering logs for kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] ...
	I0920 19:29:14.630100  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:14.706585  720494 logs.go:123] Gathering logs for kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] ...
	I0920 19:29:14.706623  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:14.762872  720494 logs.go:123] Gathering logs for kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] ...
	I0920 19:29:14.762908  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:14.812852  720494 logs.go:123] Gathering logs for kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] ...
	I0920 19:29:14.812885  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:14.865844  720494 logs.go:123] Gathering logs for container status ...
	I0920 19:29:14.865879  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:29:14.923028  720494 logs.go:123] Gathering logs for kubelet ...
	I0920 19:29:14.923065  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:29:14.957088  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:14.957339  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:14.957537  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:14.957775  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:15.020892  720494 logs.go:123] Gathering logs for dmesg ...
	I0920 19:29:15.020998  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:29:15.055155  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:15.055266  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:29:15.055358  720494 out.go:270] X Problems detected in kubelet:
	W0920 19:29:15.055399  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:15.055452  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:15.055502  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:15.055543  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:15.055594  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:15.055620  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:29:25.080014  720494 system_pods.go:59] 18 kube-system pods found
	I0920 19:29:25.080090  720494 system_pods.go:61] "coredns-7c65d6cfc9-22l55" [f57f469f-0a10-4755-8ba7-7313badf3e97] Running
	I0920 19:29:25.080099  720494 system_pods.go:61] "csi-hostpath-attacher-0" [ede42a9c-57cd-4862-a473-bb89ae43f460] Running
	I0920 19:29:25.080104  720494 system_pods.go:61] "csi-hostpath-resizer-0" [e16bf395-29bf-4855-9bc2-e53e3fa612e9] Running
	I0920 19:29:25.080109  720494 system_pods.go:61] "csi-hostpathplugin-l9l66" [e3c46cb7-cf62-418b-8b71-c758942cced2] Running
	I0920 19:29:25.080113  720494 system_pods.go:61] "etcd-addons-244316" [c4f43849-20a5-4644-a084-aec2f01202e7] Running
	I0920 19:29:25.080249  720494 system_pods.go:61] "kindnet-62dj5" [0cef216d-8448-40df-9149-c124400377d6] Running
	I0920 19:29:25.080257  720494 system_pods.go:61] "kube-apiserver-addons-244316" [c65c8858-0a0f-424e-8135-ee436e4010d3] Running
	I0920 19:29:25.080267  720494 system_pods.go:61] "kube-controller-manager-addons-244316" [6abd01ee-fed9-4a26-8c01-19cd3b5e4d53] Running
	I0920 19:29:25.080281  720494 system_pods.go:61] "kube-ingress-dns-minikube" [d7af063e-bdd0-4bcb-916b-81ed6229b4e4] Running
	I0920 19:29:25.080286  720494 system_pods.go:61] "kube-proxy-2cdvm" [dc16595e-687e-4af7-a65b-bd9a28c49509] Running
	I0920 19:29:25.080327  720494 system_pods.go:61] "kube-scheduler-addons-244316" [f7b9623c-f0ee-4360-8f31-d3cd8cf88969] Running
	I0920 19:29:25.080346  720494 system_pods.go:61] "metrics-server-84c5f94fbc-zn5jl" [5ca001ce-a4b6-4954-bd42-f372e2f387fb] Running
	I0920 19:29:25.080381  720494 system_pods.go:61] "nvidia-device-plugin-daemonset-n79hn" [be19954c-2529-4f25-bd06-6dde36d7e9e8] Running
	I0920 19:29:25.080420  720494 system_pods.go:61] "registry-66c9cd494c-2gc7z" [c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97] Running
	I0920 19:29:25.080425  720494 system_pods.go:61] "registry-proxy-tbwxh" [6bb565a3-2192-4ce8-8582-11f1d9d8ec42] Running
	I0920 19:29:25.080430  720494 system_pods.go:61] "snapshot-controller-56fcc65765-7jw7t" [b10da70d-f5dd-46eb-993d-4973a5ac3e17] Running
	I0920 19:29:25.080456  720494 system_pods.go:61] "snapshot-controller-56fcc65765-xv9vm" [a58d3b2e-0d8e-4062-9b71-a472fa7e2fa8] Running
	I0920 19:29:25.080499  720494 system_pods.go:61] "storage-provisioner" [4ec9c5b1-c429-45cd-bc2c-9563f0f898d3] Running
	I0920 19:29:25.080507  720494 system_pods.go:74] duration metric: took 11.226821637s to wait for pod list to return data ...
	I0920 19:29:25.080520  720494 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:29:25.084074  720494 default_sa.go:45] found service account: "default"
	I0920 19:29:25.084118  720494 default_sa.go:55] duration metric: took 3.588373ms for default service account to be created ...
	I0920 19:29:25.084130  720494 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:29:25.098531  720494 system_pods.go:86] 18 kube-system pods found
	I0920 19:29:25.098691  720494 system_pods.go:89] "coredns-7c65d6cfc9-22l55" [f57f469f-0a10-4755-8ba7-7313badf3e97] Running
	I0920 19:29:25.098718  720494 system_pods.go:89] "csi-hostpath-attacher-0" [ede42a9c-57cd-4862-a473-bb89ae43f460] Running
	I0920 19:29:25.098741  720494 system_pods.go:89] "csi-hostpath-resizer-0" [e16bf395-29bf-4855-9bc2-e53e3fa612e9] Running
	I0920 19:29:25.098764  720494 system_pods.go:89] "csi-hostpathplugin-l9l66" [e3c46cb7-cf62-418b-8b71-c758942cced2] Running
	I0920 19:29:25.098787  720494 system_pods.go:89] "etcd-addons-244316" [c4f43849-20a5-4644-a084-aec2f01202e7] Running
	I0920 19:29:25.098799  720494 system_pods.go:89] "kindnet-62dj5" [0cef216d-8448-40df-9149-c124400377d6] Running
	I0920 19:29:25.098808  720494 system_pods.go:89] "kube-apiserver-addons-244316" [c65c8858-0a0f-424e-8135-ee436e4010d3] Running
	I0920 19:29:25.098814  720494 system_pods.go:89] "kube-controller-manager-addons-244316" [6abd01ee-fed9-4a26-8c01-19cd3b5e4d53] Running
	I0920 19:29:25.098820  720494 system_pods.go:89] "kube-ingress-dns-minikube" [d7af063e-bdd0-4bcb-916b-81ed6229b4e4] Running
	I0920 19:29:25.098824  720494 system_pods.go:89] "kube-proxy-2cdvm" [dc16595e-687e-4af7-a65b-bd9a28c49509] Running
	I0920 19:29:25.098829  720494 system_pods.go:89] "kube-scheduler-addons-244316" [f7b9623c-f0ee-4360-8f31-d3cd8cf88969] Running
	I0920 19:29:25.098833  720494 system_pods.go:89] "metrics-server-84c5f94fbc-zn5jl" [5ca001ce-a4b6-4954-bd42-f372e2f387fb] Running
	I0920 19:29:25.098839  720494 system_pods.go:89] "nvidia-device-plugin-daemonset-n79hn" [be19954c-2529-4f25-bd06-6dde36d7e9e8] Running
	I0920 19:29:25.098847  720494 system_pods.go:89] "registry-66c9cd494c-2gc7z" [c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97] Running
	I0920 19:29:25.098851  720494 system_pods.go:89] "registry-proxy-tbwxh" [6bb565a3-2192-4ce8-8582-11f1d9d8ec42] Running
	I0920 19:29:25.098858  720494 system_pods.go:89] "snapshot-controller-56fcc65765-7jw7t" [b10da70d-f5dd-46eb-993d-4973a5ac3e17] Running
	I0920 19:29:25.098862  720494 system_pods.go:89] "snapshot-controller-56fcc65765-xv9vm" [a58d3b2e-0d8e-4062-9b71-a472fa7e2fa8] Running
	I0920 19:29:25.098869  720494 system_pods.go:89] "storage-provisioner" [4ec9c5b1-c429-45cd-bc2c-9563f0f898d3] Running
	I0920 19:29:25.098878  720494 system_pods.go:126] duration metric: took 14.740845ms to wait for k8s-apps to be running ...
	I0920 19:29:25.098891  720494 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:29:25.098960  720494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:29:25.113514  720494 system_svc.go:56] duration metric: took 14.611289ms WaitForService to wait for kubelet
	I0920 19:29:25.113546  720494 kubeadm.go:582] duration metric: took 3m0.379953199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:29:25.113573  720494 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:29:25.118070  720494 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:29:25.118139  720494 node_conditions.go:123] node cpu capacity is 2
	I0920 19:29:25.118151  720494 node_conditions.go:105] duration metric: took 4.571143ms to run NodePressure ...
	I0920 19:29:25.118164  720494 start.go:241] waiting for startup goroutines ...
	I0920 19:29:25.118172  720494 start.go:246] waiting for cluster config update ...
	I0920 19:29:25.118187  720494 start.go:255] writing updated cluster config ...
	I0920 19:29:25.118506  720494 ssh_runner.go:195] Run: rm -f paused
	I0920 19:29:25.476128  720494 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:29:25.479298  720494 out.go:177] * Done! kubectl is now configured to use "addons-244316" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.010542082Z" level=info msg="Stopped pod sandbox: 865406ed79da61ba48fd7df3f764827a7c4a420046e3818e724e46d8348ee0fb" id=43af872b-7f1c-40fd-b6b9-0ac85f3f560a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.040643310Z" level=info msg="Removing container: 727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de" id=fcac8960-8614-413c-809d-a9b64b31f8db name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.047734848Z" level=info msg="Removing container: db8907b6eab1c76f10459e229f8cc23cf81881bfddd1db506a18c041ad04d890" id=6a9b99d9-d4bb-4816-b972-0d41c4aa30d8 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.062001016Z" level=info msg="Removed container 727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de: ingress-nginx/ingress-nginx-controller-bc57996ff-vstxr/controller" id=fcac8960-8614-413c-809d-a9b64b31f8db name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.080292996Z" level=info msg="Removed container db8907b6eab1c76f10459e229f8cc23cf81881bfddd1db506a18c041ad04d890: ingress-nginx/ingress-nginx-admission-create-tpm65/create" id=6a9b99d9-d4bb-4816-b972-0d41c4aa30d8 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.082864630Z" level=info msg="Removing container: a616675d627b4e5b5a399852f1041f9439e048163c14d413fa74347856aa8293" id=e40b20c8-728d-492e-8ca1-fd27794f03ee name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.108149090Z" level=info msg="Removed container a616675d627b4e5b5a399852f1041f9439e048163c14d413fa74347856aa8293: ingress-nginx/ingress-nginx-admission-patch-r8q44/patch" id=e40b20c8-728d-492e-8ca1-fd27794f03ee name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.110013023Z" level=info msg="Stopping pod sandbox: 865406ed79da61ba48fd7df3f764827a7c4a420046e3818e724e46d8348ee0fb" id=dbef7405-3900-498f-a3ec-3451779508c1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.110081936Z" level=info msg="Stopped pod sandbox (already stopped): 865406ed79da61ba48fd7df3f764827a7c4a420046e3818e724e46d8348ee0fb" id=dbef7405-3900-498f-a3ec-3451779508c1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.110793739Z" level=info msg="Removing pod sandbox: 865406ed79da61ba48fd7df3f764827a7c4a420046e3818e724e46d8348ee0fb" id=d0a6852f-317d-461c-bb5e-802160baadc7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.123220401Z" level=info msg="Removed pod sandbox: 865406ed79da61ba48fd7df3f764827a7c4a420046e3818e724e46d8348ee0fb" id=d0a6852f-317d-461c-bb5e-802160baadc7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.123822168Z" level=info msg="Stopping pod sandbox: 8491d3e8473d72bc5c8fd451f0b87849674d98f206694cde901eca3e290e209c" id=9d9182af-f6a9-4cd1-9944-ec6aaa74530b name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.123859205Z" level=info msg="Stopped pod sandbox (already stopped): 8491d3e8473d72bc5c8fd451f0b87849674d98f206694cde901eca3e290e209c" id=9d9182af-f6a9-4cd1-9944-ec6aaa74530b name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.124193195Z" level=info msg="Removing pod sandbox: 8491d3e8473d72bc5c8fd451f0b87849674d98f206694cde901eca3e290e209c" id=0749cd0d-361c-406d-ae71-53d2db848820 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.135530191Z" level=info msg="Removed pod sandbox: 8491d3e8473d72bc5c8fd451f0b87849674d98f206694cde901eca3e290e209c" id=0749cd0d-361c-406d-ae71-53d2db848820 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.136026828Z" level=info msg="Stopping pod sandbox: c9c4141d93287a1cf61566af811ef73cc57c44bfebf44c6abadcb155f9d2e994" id=e81e842d-e398-42f7-bb30-eac2796640e2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.136063578Z" level=info msg="Stopped pod sandbox (already stopped): c9c4141d93287a1cf61566af811ef73cc57c44bfebf44c6abadcb155f9d2e994" id=e81e842d-e398-42f7-bb30-eac2796640e2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.136351703Z" level=info msg="Removing pod sandbox: c9c4141d93287a1cf61566af811ef73cc57c44bfebf44c6abadcb155f9d2e994" id=e86fb261-cac7-4b49-aa1b-cc713c0f0a52 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.147165757Z" level=info msg="Removed pod sandbox: c9c4141d93287a1cf61566af811ef73cc57c44bfebf44c6abadcb155f9d2e994" id=e86fb261-cac7-4b49-aa1b-cc713c0f0a52 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.147645623Z" level=info msg="Stopping pod sandbox: 889ee343b1c15bea326e9bfc0702bd1d5bfe5286cfe8a3baf1d9ad9640cf576f" id=d08ebe13-405c-404a-b6c5-13ce1fb26553 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.147681840Z" level=info msg="Stopped pod sandbox (already stopped): 889ee343b1c15bea326e9bfc0702bd1d5bfe5286cfe8a3baf1d9ad9640cf576f" id=d08ebe13-405c-404a-b6c5-13ce1fb26553 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.148047805Z" level=info msg="Removing pod sandbox: 889ee343b1c15bea326e9bfc0702bd1d5bfe5286cfe8a3baf1d9ad9640cf576f" id=e1487850-6afd-4e1b-89cc-e7c94b225453 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:41:21 addons-244316 crio[966]: time="2024-09-20 19:41:21.164230853Z" level=info msg="Removed pod sandbox: 889ee343b1c15bea326e9bfc0702bd1d5bfe5286cfe8a3baf1d9ad9640cf576f" id=e1487850-6afd-4e1b-89cc-e7c94b225453 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:41:22 addons-244316 crio[966]: time="2024-09-20 19:41:22.568631013Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=44a65a70-dc2d-422b-8f73-9bed045ab586 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:41:22 addons-244316 crio[966]: time="2024-09-20 19:41:22.568897781Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=44a65a70-dc2d-422b-8f73-9bed045ab586 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	c0e6f6aec9e54       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   9 seconds ago       Running             hello-world-app            0                   b79e0f239ddfc       hello-world-app-55bf9c44b4-mg7cv
	6e0b4a9414739       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                         2 minutes ago       Running             nginx                      0                   dc3e5360e6ba0       nginx
	d315e9086557b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            13 minutes ago      Running             gcp-auth                   0                   c6f74f4e64606       gcp-auth-89d5ffd79-d2tpp
	37386e7680939       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        13 minutes ago      Running             local-path-provisioner     0                   cc709ac8a6230       local-path-provisioner-86d989889c-fzcjl
	fad6a3dc1df6d       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                13 minutes ago      Running             nvidia-device-plugin-ctr   0                   0e878388421b6       nvidia-device-plugin-daemonset-n79hn
	fe7b8006fe5da       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   13 minutes ago      Running             metrics-server             0                   22cef3edf18b3       metrics-server-84c5f94fbc-zn5jl
	c9915246bb266       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                         13 minutes ago      Running             yakd                       0                   ccb50425f1dc8       yakd-dashboard-67d98fc6b-pb547
	22be32730e544       gcr.io/cloud-spanner-emulator/emulator@sha256:41ec188288c7943f488600462b2b74002814e52439be82d15de33c3ee4898a58          14 minutes ago      Running             cloud-spanner-emulator     0                   7ac06d97a266b       cloud-spanner-emulator-769b77f747-dp6lg
	c524ed738a8d3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        14 minutes ago      Running             storage-provisioner        0                   780e530cacd2f       storage-provisioner
	057fc4f7aad90       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        14 minutes ago      Running             coredns                    0                   19fae96941a4c       coredns-7c65d6cfc9-22l55
	f693c5f3d507b       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        14 minutes ago      Running             kube-proxy                 0                   cc62ba102a745       kube-proxy-2cdvm
	4321d12c79ddf       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        14 minutes ago      Running             kindnet-cni                0                   b90c147beb0ad       kindnet-62dj5
	4d724338eea34       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        15 minutes ago      Running             kube-scheduler             0                   05db024319aa0       kube-scheduler-addons-244316
	be05ccc3ccb37       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        15 minutes ago      Running             kube-controller-manager    0                   6b477bdf2c558       kube-controller-manager-addons-244316
	7df0e0b9e62ff       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        15 minutes ago      Running             kube-apiserver             0                   ac32244a5406b       kube-apiserver-addons-244316
	a6f3359b2e88b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        15 minutes ago      Running             etcd                       0                   1c0ae6d7145c8       etcd-addons-244316
	
	
	==> coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] <==
	[INFO] 10.244.0.15:59114 - 56514 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104129s
	[INFO] 10.244.0.15:50932 - 11460 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002666137s
	[INFO] 10.244.0.15:50932 - 39626 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002906394s
	[INFO] 10.244.0.15:33525 - 33323 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000546449s
	[INFO] 10.244.0.15:33525 - 45348 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000577833s
	[INFO] 10.244.0.15:59699 - 23607 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00012291s
	[INFO] 10.244.0.15:59699 - 54075 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179384s
	[INFO] 10.244.0.15:32831 - 28558 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072893s
	[INFO] 10.244.0.15:32831 - 18096 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000140543s
	[INFO] 10.244.0.15:45505 - 40088 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101889s
	[INFO] 10.244.0.15:45505 - 32415 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000152037s
	[INFO] 10.244.0.15:34547 - 57598 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001603466s
	[INFO] 10.244.0.15:34547 - 49347 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001676244s
	[INFO] 10.244.0.15:39827 - 28188 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076618s
	[INFO] 10.244.0.15:39827 - 45592 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000050157s
	[INFO] 10.244.0.20:47707 - 16074 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002686658s
	[INFO] 10.244.0.20:46427 - 23800 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00268235s
	[INFO] 10.244.0.20:57231 - 19877 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157937s
	[INFO] 10.244.0.20:45688 - 62216 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000097565s
	[INFO] 10.244.0.20:33274 - 51885 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125626s
	[INFO] 10.244.0.20:49302 - 18918 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092175s
	[INFO] 10.244.0.20:49895 - 24635 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00244512s
	[INFO] 10.244.0.20:44018 - 55406 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002061548s
	[INFO] 10.244.0.20:38373 - 29636 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000956351s
	[INFO] 10.244.0.20:33201 - 4012 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000749012s
	
	
	==> describe nodes <==
	Name:               addons-244316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-244316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-244316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_26_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-244316
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:26:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-244316
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:41:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:39:25 +0000   Fri, 20 Sep 2024 19:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:39:25 +0000   Fri, 20 Sep 2024 19:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:39:25 +0000   Fri, 20 Sep 2024 19:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:39:25 +0000   Fri, 20 Sep 2024 19:27:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-244316
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 545b19fe9bdc45b392d49f2b91832698
	  System UUID:                ef4c1a4b-0c08-44ed-8fa8-b5206cbb0701
	  Boot ID:                    7d682649-b07c-44b5-a0a6-3c50df538ea4
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     cloud-spanner-emulator-769b77f747-dp6lg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-mg7cv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-89d5ffd79-d2tpp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-22l55                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-244316                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-62dj5                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-244316               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-244316      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-2cdvm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-244316               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-zn5jl            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 nvidia-device-plugin-daemonset-n79hn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-fzcjl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-pb547             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 15m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m   kubelet          Node addons-244316 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m   kubelet          Node addons-244316 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m   kubelet          Node addons-244316 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m   node-controller  Node addons-244316 event: Registered Node addons-244316 in Controller
	  Normal   NodeReady                14m   kubelet          Node addons-244316 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 18:56] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep20 19:09] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep20 19:16] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] <==
	{"level":"info","ts":"2024-09-20T19:26:15.496978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:26:15.497337Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.498258Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:26:15.499377Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:26:15.499863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:26:15.499980Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.500310Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.500373Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.505603Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:26:15.506814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T19:26:15.513007Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:26:15.513100Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:26:26.241888Z","caller":"traceutil/trace.go:171","msg":"trace[2026199184] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"100.101501ms","start":"2024-09-20T19:26:26.141768Z","end":"2024-09-20T19:26:26.241870Z","steps":["trace[2026199184] 'process raft request'  (duration: 39.701056ms)","trace[2026199184] 'compare'  (duration: 60.307606ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T19:26:27.471528Z","caller":"traceutil/trace.go:171","msg":"trace[1882640504] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"106.638225ms","start":"2024-09-20T19:26:27.364872Z","end":"2024-09-20T19:26:27.471511Z","steps":["trace[1882640504] 'process raft request'  (duration: 59.29789ms)","trace[1882640504] 'compare'  (duration: 46.971064ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T19:26:27.526046Z","caller":"traceutil/trace.go:171","msg":"trace[973402408] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"107.82845ms","start":"2024-09-20T19:26:27.418197Z","end":"2024-09-20T19:26:27.526026Z","steps":["trace[973402408] 'process raft request'  (duration: 53.066249ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:27.571075Z","caller":"traceutil/trace.go:171","msg":"trace[2112418004] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"122.751557ms","start":"2024-09-20T19:26:27.448305Z","end":"2024-09-20T19:26:27.571056Z","steps":["trace[2112418004] 'process raft request'  (duration: 119.526598ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:28.500251Z","caller":"traceutil/trace.go:171","msg":"trace[95206656] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"174.74035ms","start":"2024-09-20T19:26:28.325306Z","end":"2024-09-20T19:26:28.500046Z","steps":["trace[95206656] 'process raft request'  (duration: 120.086848ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:28.701213Z","caller":"traceutil/trace.go:171","msg":"trace[792115432] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"135.97743ms","start":"2024-09-20T19:26:28.565220Z","end":"2024-09-20T19:26:28.701197Z","steps":["trace[792115432] 'process raft request'  (duration: 135.866171ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:28.721747Z","caller":"traceutil/trace.go:171","msg":"trace[634577408] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"148.304093ms","start":"2024-09-20T19:26:28.573386Z","end":"2024-09-20T19:26:28.721690Z","steps":["trace[634577408] 'process raft request'  (duration: 147.53831ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:36:15.774535Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1499}
	{"level":"info","ts":"2024-09-20T19:36:15.811553Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1499,"took":"36.293092ms","hash":1427089083,"current-db-size-bytes":6217728,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3289088,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-20T19:36:15.811629Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1427089083,"revision":1499,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T19:41:15.781293Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1917}
	{"level":"info","ts":"2024-09-20T19:41:15.800026Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1917,"took":"18.222452ms","hash":2481227134,"current-db-size-bytes":6217728,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4194304,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-09-20T19:41:15.800387Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2481227134,"revision":1917,"compact-revision":1499}
	
	
	==> gcp-auth [d315e9086557bcb438ba82c9c8029a5fa6eb5ca36d005581c58a6149197ccc08] <==
	2024/09/20 19:28:07 GCP Auth Webhook started!
	2024/09/20 19:29:25 Ready to marshal response ...
	2024/09/20 19:29:25 Ready to write response ...
	2024/09/20 19:29:25 Ready to marshal response ...
	2024/09/20 19:29:25 Ready to write response ...
	2024/09/20 19:29:25 Ready to marshal response ...
	2024/09/20 19:29:25 Ready to write response ...
	2024/09/20 19:37:30 Ready to marshal response ...
	2024/09/20 19:37:30 Ready to write response ...
	2024/09/20 19:37:30 Ready to marshal response ...
	2024/09/20 19:37:30 Ready to write response ...
	2024/09/20 19:37:30 Ready to marshal response ...
	2024/09/20 19:37:30 Ready to write response ...
	2024/09/20 19:37:39 Ready to marshal response ...
	2024/09/20 19:37:39 Ready to write response ...
	2024/09/20 19:38:07 Ready to marshal response ...
	2024/09/20 19:38:07 Ready to write response ...
	2024/09/20 19:38:22 Ready to marshal response ...
	2024/09/20 19:38:22 Ready to write response ...
	2024/09/20 19:38:54 Ready to marshal response ...
	2024/09/20 19:38:54 Ready to write response ...
	2024/09/20 19:41:15 Ready to marshal response ...
	2024/09/20 19:41:15 Ready to write response ...
	
	
	==> kernel <==
	 19:41:26 up  3:23,  0 users,  load average: 0.42, 0.60, 1.56
	Linux addons-244316 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] <==
	I0920 19:39:20.053826       1 main.go:299] handling current node
	I0920 19:39:30.049814       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:39:30.049866       1 main.go:299] handling current node
	I0920 19:39:40.049753       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:39:40.049929       1 main.go:299] handling current node
	I0920 19:39:50.052869       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:39:50.052919       1 main.go:299] handling current node
	I0920 19:40:00.053429       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:40:00.053560       1 main.go:299] handling current node
	I0920 19:40:10.051058       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:40:10.051102       1 main.go:299] handling current node
	I0920 19:40:20.050403       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:40:20.050662       1 main.go:299] handling current node
	I0920 19:40:30.050565       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:40:30.050612       1 main.go:299] handling current node
	I0920 19:40:40.049916       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:40:40.049959       1 main.go:299] handling current node
	I0920 19:40:50.057692       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:40:50.057821       1 main.go:299] handling current node
	I0920 19:41:00.049756       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:41:00.049889       1 main.go:299] handling current node
	I0920 19:41:10.056531       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:41:10.056567       1 main.go:299] handling current node
	I0920 19:41:20.049785       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:41:20.049828       1 main.go:299] handling current node
	
	
	==> kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] <==
	 > logger="UnhandledError"
	E0920 19:28:51.046775       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.3.33:443: connect: connection refused" logger="UnhandledError"
	E0920 19:28:51.048849       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.3.33:443: connect: connection refused" logger="UnhandledError"
	E0920 19:28:51.053974       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.3.33:443: connect: connection refused" logger="UnhandledError"
	I0920 19:28:51.140284       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 19:37:30.422760       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.183.123"}
	I0920 19:38:18.972262       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 19:38:38.722956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.723050       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.790058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.790113       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.821684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.822017       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.825093       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.825209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.857263       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.857387       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 19:38:39.823647       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 19:38:39.858807       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 19:38:39.873190       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 19:38:49.060313       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 19:38:50.091711       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 19:38:54.713979       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 19:38:55.034820       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.138.158"}
	I0920 19:41:15.565906       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.94.217"}
	
	
	==> kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] <==
	W0920 19:40:00.692430       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:40:00.692480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:40:01.196312       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:40:01.196361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:40:11.246659       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:40:11.246706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:40:42.828562       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:40:42.828614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:40:44.935565       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:40:44.935610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:40:49.975071       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:40:49.975205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:40:54.149235       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:40:54.149280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 19:41:15.297337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="57.734443ms"
	I0920 19:41:15.318352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="20.806228ms"
	I0920 19:41:15.321356       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="87.448µs"
	I0920 19:41:15.341829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.734µs"
	W0920 19:41:15.694833       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:41:15.694955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 19:41:17.062659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.155883ms"
	I0920 19:41:17.062947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="62.054µs"
	I0920 19:41:17.780380       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0920 19:41:17.785364       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.521µs"
	I0920 19:41:17.787253       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	
	
	==> kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] <==
	I0920 19:26:30.694959       1 server_linux.go:66] "Using iptables proxy"
	I0920 19:26:30.941347       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 19:26:30.941501       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:26:31.113765       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 19:26:31.114389       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:26:31.247312       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:26:31.248577       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:26:31.248685       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:26:31.285254       1 config.go:199] "Starting service config controller"
	I0920 19:26:31.285292       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:26:31.285317       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:26:31.285321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:26:31.285701       1 config.go:328] "Starting node config controller"
	I0920 19:26:31.285721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:26:31.386427       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:26:31.388471       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:26:31.386092       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] <==
	W0920 19:26:17.922834       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:26:17.922851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.922893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:26:17.922937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.922953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0920 19:26:17.923002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:26:17.923021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 19:26:17.923044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.922914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:26:17.923121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.923082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:26:17.923221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.745705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:26:18.745830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.753046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:26:18.753087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.822086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:26:18.822126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.825544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:26:18.825670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:19.036881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:26:19.036999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:19.047421       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:26:19.047533       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 19:26:21.817013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:41:17 addons-244316 kubelet[1514]: E0920 19:41:17.053620    1514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0712afd630c50804eab1d995f603ce6f62534510fd5f0e1cdc38150bdefcb143\": container with ID starting with 0712afd630c50804eab1d995f603ce6f62534510fd5f0e1cdc38150bdefcb143 not found: ID does not exist" containerID="0712afd630c50804eab1d995f603ce6f62534510fd5f0e1cdc38150bdefcb143"
	Sep 20 19:41:17 addons-244316 kubelet[1514]: I0920 19:41:17.053659    1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0712afd630c50804eab1d995f603ce6f62534510fd5f0e1cdc38150bdefcb143"} err="failed to get container status \"0712afd630c50804eab1d995f603ce6f62534510fd5f0e1cdc38150bdefcb143\": rpc error: code = NotFound desc = could not find container \"0712afd630c50804eab1d995f603ce6f62534510fd5f0e1cdc38150bdefcb143\": container with ID starting with 0712afd630c50804eab1d995f603ce6f62534510fd5f0e1cdc38150bdefcb143 not found: ID does not exist"
	Sep 20 19:41:17 addons-244316 kubelet[1514]: I0920 19:41:17.081650    1514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-mg7cv" podStartSLOduration=1.002242841 podStartE2EDuration="2.081628627s" podCreationTimestamp="2024-09-20 19:41:15 +0000 UTC" firstStartedPulling="2024-09-20 19:41:15.68095868 +0000 UTC m=+895.338006991" lastFinishedPulling="2024-09-20 19:41:16.760344466 +0000 UTC m=+896.417392777" observedRunningTime="2024-09-20 19:41:17.057408061 +0000 UTC m=+896.714456371" watchObservedRunningTime="2024-09-20 19:41:17.081628627 +0000 UTC m=+896.738676938"
	Sep 20 19:41:18 addons-244316 kubelet[1514]: I0920 19:41:18.569766    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422bf1d0-501d-496a-bddc-738f621ef540" path="/var/lib/kubelet/pods/422bf1d0-501d-496a-bddc-738f621ef540/volumes"
	Sep 20 19:41:18 addons-244316 kubelet[1514]: I0920 19:41:18.570187    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7af063e-bdd0-4bcb-916b-81ed6229b4e4" path="/var/lib/kubelet/pods/d7af063e-bdd0-4bcb-916b-81ed6229b4e4/volumes"
	Sep 20 19:41:18 addons-244316 kubelet[1514]: I0920 19:41:18.570537    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3322e17-8748-4ecf-b0aa-8b62a448ba0c" path="/var/lib/kubelet/pods/e3322e17-8748-4ecf-b0aa-8b62a448ba0c/volumes"
	Sep 20 19:41:20 addons-244316 kubelet[1514]: E0920 19:41:20.574852    1514 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7, memory: /docker/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/system.slice/kubelet.service"
	Sep 20 19:41:20 addons-244316 kubelet[1514]: E0920 19:41:20.905807    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861280905528588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543500,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:41:20 addons-244316 kubelet[1514]: E0920 19:41:20.905845    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861280905528588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543500,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.038883    1514 scope.go:117] "RemoveContainer" containerID="727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de"
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.044611    1514 scope.go:117] "RemoveContainer" containerID="db8907b6eab1c76f10459e229f8cc23cf81881bfddd1db506a18c041ad04d890"
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.062364    1514 scope.go:117] "RemoveContainer" containerID="727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de"
	Sep 20 19:41:21 addons-244316 kubelet[1514]: E0920 19:41:21.062766    1514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de\": container with ID starting with 727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de not found: ID does not exist" containerID="727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de"
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.062811    1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de"} err="failed to get container status \"727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de\": rpc error: code = NotFound desc = could not find container \"727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de\": container with ID starting with 727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de not found: ID does not exist"
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.080822    1514 scope.go:117] "RemoveContainer" containerID="727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de"
	Sep 20 19:41:21 addons-244316 kubelet[1514]: E0920 19:41:21.081410    1514 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de\": rpc error: code = NotFound desc = could not find container \"727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de\": container with ID starting with 727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de not found: ID does not exist" containerID="727a799cbc3b52a33df15a96843451989b07dc4c94b81d4b7f64b25f8390c8de"
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.081462    1514 scope.go:117] "RemoveContainer" containerID="a616675d627b4e5b5a399852f1041f9439e048163c14d413fa74347856aa8293"
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.178664    1514 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/535fbb72-dca5-47eb-9ccd-0e05fd541f07-webhook-cert\") pod \"535fbb72-dca5-47eb-9ccd-0e05fd541f07\" (UID: \"535fbb72-dca5-47eb-9ccd-0e05fd541f07\") "
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.178791    1514 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n82wc\" (UniqueName: \"kubernetes.io/projected/535fbb72-dca5-47eb-9ccd-0e05fd541f07-kube-api-access-n82wc\") pod \"535fbb72-dca5-47eb-9ccd-0e05fd541f07\" (UID: \"535fbb72-dca5-47eb-9ccd-0e05fd541f07\") "
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.181038    1514 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535fbb72-dca5-47eb-9ccd-0e05fd541f07-kube-api-access-n82wc" (OuterVolumeSpecName: "kube-api-access-n82wc") pod "535fbb72-dca5-47eb-9ccd-0e05fd541f07" (UID: "535fbb72-dca5-47eb-9ccd-0e05fd541f07"). InnerVolumeSpecName "kube-api-access-n82wc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.181187    1514 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/535fbb72-dca5-47eb-9ccd-0e05fd541f07-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "535fbb72-dca5-47eb-9ccd-0e05fd541f07" (UID: "535fbb72-dca5-47eb-9ccd-0e05fd541f07"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.279698    1514 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/535fbb72-dca5-47eb-9ccd-0e05fd541f07-webhook-cert\") on node \"addons-244316\" DevicePath \"\""
	Sep 20 19:41:21 addons-244316 kubelet[1514]: I0920 19:41:21.279738    1514 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-n82wc\" (UniqueName: \"kubernetes.io/projected/535fbb72-dca5-47eb-9ccd-0e05fd541f07-kube-api-access-n82wc\") on node \"addons-244316\" DevicePath \"\""
	Sep 20 19:41:22 addons-244316 kubelet[1514]: E0920 19:41:22.569117    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="26ebf772-d5b9-4d72-93d5-706cab403777"
	Sep 20 19:41:22 addons-244316 kubelet[1514]: I0920 19:41:22.570307    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535fbb72-dca5-47eb-9ccd-0e05fd541f07" path="/var/lib/kubelet/pods/535fbb72-dca5-47eb-9ccd-0e05fd541f07/volumes"
	
	
	==> storage-provisioner [c524ed738a8d38b9f6bd037c1dc8d7fef60bc2f2cd8fb0f684e4eb386bf75f67] <==
	I0920 19:27:11.555337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:27:11.604952       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:27:11.605114       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:27:11.637195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:27:11.637419       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-244316_90ec2255-c73c-4224-95cd-667ebf7eeaa4!
	I0920 19:27:11.637476       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"064fe9f7-ba2a-47d4-ac4c-01438c7426a0", APIVersion:"v1", ResourceVersion:"888", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-244316_90ec2255-c73c-4224-95cd-667ebf7eeaa4 became leader
	I0920 19:27:11.737871       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-244316_90ec2255-c73c-4224-95cd-667ebf7eeaa4!
	

                                                
                                                
-- /stdout --
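The kubelet excerpt above shows busybox repeatedly backing off on its image pull, which matches the non-running pod flagged just below. As a minimal sketch, the same node-level view can be re-collected against this profile (the profile name addons-244316 is taken from the run above; crictl being present on the node is an assumption about the crio runtime image):

	# tail the last 25 log lines for every component, as the harness does
	out/minikube-linux-arm64 -p addons-244316 logs -n 25
	# list all containers on the node, including exited ones
	out/minikube-linux-arm64 -p addons-244316 ssh -- sudo crictl ps -a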
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-244316 -n addons-244316
helpers_test.go:261: (dbg) Run:  kubectl --context addons-244316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-244316 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-244316 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-244316/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 19:29:25 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x65mx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x65mx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/busybox to addons-244316
	  Normal   Pulling    10m (x4 over 12m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 12m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 12m)    kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 12m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    115s (x42 over 12m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.31s)
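The describe output above pins the failure on registry authentication during the image pull: the kubelet could not fetch gcr.io/k8s-minikube/busybox:1.28.4-glibc ("unable to retrieve auth token: invalid username/password"). A minimal sketch for confirming that diagnosis from the same context (these commands are illustrative and not part of the test run):

	# pull-related events for the stuck pod
	kubectl --context addons-244316 -n default get events --field-selector involvedObject.name=busybox
	# full pod spec and event history
	kubectl --context addons-244316 -n default describe pod busybox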

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (331.1s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 7.599014ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zn5jl" [5ca001ce-a4b6-4954-bd42-f372e2f387fb] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.008056449s
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (161.514251ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 12m17.191235664s

                                                
                                                
** /stderr **
I0920 19:38:45.195862  719734 retry.go:31] will retry after 3.908928003s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (112.27999ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 12m21.215417585s

                                                
                                                
** /stderr **
I0920 19:38:49.218280  719734 retry.go:31] will retry after 2.606609504s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (99.928068ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 12m23.922214924s

                                                
                                                
** /stderr **
I0920 19:38:51.925159  719734 retry.go:31] will retry after 7.425473982s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (100.157625ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 12m31.448598575s

                                                
                                                
** /stderr **
I0920 19:38:59.451675  719734 retry.go:31] will retry after 14.550505063s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (108.652824ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 12m46.108133885s

                                                
                                                
** /stderr **
I0920 19:39:14.111213  719734 retry.go:31] will retry after 21.797749455s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (91.238102ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 13m7.996844221s

                                                
                                                
** /stderr **
I0920 19:39:36.000643  719734 retry.go:31] will retry after 14.853267349s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (98.610447ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 13m22.949478321s

                                                
                                                
** /stderr **
I0920 19:39:50.953104  719734 retry.go:31] will retry after 37.039524538s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (102.397708ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 14m0.092225904s

                                                
                                                
** /stderr **
I0920 19:40:28.095411  719734 retry.go:31] will retry after 1m16.578930888s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (91.25554ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 15m16.7629399s

                                                
                                                
** /stderr **
I0920 19:41:44.766986  719734 retry.go:31] will retry after 1m3.384687853s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (89.676606ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 16m20.239389794s

                                                
                                                
** /stderr **
I0920 19:42:48.243294  719734 retry.go:31] will retry after 44.479629504s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (91.652088ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 17m4.810902455s

                                                
                                                
** /stderr **
I0920 19:43:32.815544  719734 retry.go:31] will retry after 33.61471749s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-244316 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-244316 top pods -n kube-system: exit status 1 (94.997061ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-22l55, age: 17m38.522145692s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
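Every kubectl top pods retry above returned "Metrics not available", and the kube-apiserver log earlier in this report shows probes of v1beta1.metrics.k8s.io being refused. A minimal sketch for checking whether the metrics API is registered and serving, assuming the same addons-244316 context (not part of the harness itself):

	# the APIService should report Available=True once metrics-server is reachable
	kubectl --context addons-244316 get apiservice v1beta1.metrics.k8s.io
	# recent metrics-server output, e.g. TLS or scrape errors
	kubectl --context addons-244316 -n kube-system logs -l k8s-app=metrics-server --tail=50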
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-244316
helpers_test.go:235: (dbg) docker inspect addons-244316:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7",
	        "Created": "2024-09-20T19:25:55.126788858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 720989,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T19:25:55.300608812Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/hosts",
	        "LogPath": "/var/lib/docker/containers/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7/3d82610f1fe47853e4dee755c91adcdde78a45fdc903225d2e20cbb7f123faf7-json.log",
	        "Name": "/addons-244316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-244316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-244316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b-init/diff:/var/lib/docker/overlay2/abb52e4f5a7bf897f28cf92e83fcbaaa3eeab65622f14fe44da11027a9deb44f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/105335214c4d900a78658ce80448d8e1b3a6ae42f7a4bc31c9c402b03cc84f4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-244316",
	                "Source": "/var/lib/docker/volumes/addons-244316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-244316",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-244316",
	                "name.minikube.sigs.k8s.io": "addons-244316",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f3d1276f3986829b7ef05a9018d68f3626ebc86f1f53155e972dab26ef3188f",
	            "SandboxKey": "/var/run/docker/netns/3f3d1276f398",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-244316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8bb19f13f00a01d1da94938835d45e58571681a0667d77334eb4d48ebd8f6ef5",
	                    "EndpointID": "84f0a8ea26206e832205d2bb50a56b3db3dc2ad8c485969f2e47f1627577b1a0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-244316",
	                        "3d82610f1fe4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
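The NetworkSettings block above shows each container port published on a loopback host port, for example 5000/tcp on 127.0.0.1:32770. A minimal sketch of how those mappings translate into host-side probes (the 32770 value is specific to this run and would differ on another host):

	# list the node container's published ports
	docker port addons-244316
	# hit the registry's HTTP API through the published 5000/tcp mapping
	curl -sS http://127.0.0.1:32770/v2/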
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-244316 -n addons-244316
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-244316 logs -n 25: (1.697315832s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-484642                                                                     | download-only-484642   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | --download-only -p                                                                          | download-docker-394536 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | download-docker-394536                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-394536                                                                   | download-docker-394536 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-387387   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | binary-mirror-387387                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34931                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-387387                                                                     | binary-mirror-387387   | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| addons  | enable dashboard -p                                                                         | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | addons-244316                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | addons-244316                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-244316 --wait=true                                                                | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:37 UTC | 20 Sep 24 19:37 UTC |
	|         | -p addons-244316                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-244316 addons disable                                                                | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:37 UTC | 20 Sep 24 19:37 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-244316 addons                                                                        | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-244316 addons                                                                        | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-244316 ip                                                                            | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	| addons  | addons-244316 addons disable                                                                | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:38 UTC | 20 Sep 24 19:38 UTC |
	|         | addons-244316                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-244316 ssh curl -s                                                                   | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-244316 ip                                                                            | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	| addons  | addons-244316 addons disable                                                                | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-244316 addons disable                                                                | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-244316 ssh cat                                                                       | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	|         | /opt/local-path-provisioner/pvc-e1313a3d-b51a-462f-b9f3-00a0a6f9bc14_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-244316 addons disable                                                                | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-244316 addons disable                                                                | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:41 UTC | 20 Sep 24 19:41 UTC |
	|         | -p addons-244316                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:42 UTC | 20 Sep 24 19:42 UTC |
	|         | addons-244316                                                                               |                        |         |         |                     |                     |
	| addons  | addons-244316 addons                                                                        | addons-244316          | jenkins | v1.34.0 | 20 Sep 24 19:44 UTC | 20 Sep 24 19:44 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:25:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:25:29.773517  720494 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:29.773681  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:29.773717  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:29.773723  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:29.774046  720494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 19:25:29.774682  720494 out.go:352] Setting JSON to false
	I0920 19:25:29.775868  720494 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11279,"bootTime":1726849051,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 19:25:29.775943  720494 start.go:139] virtualization:  
	I0920 19:25:29.779178  720494 out.go:177] * [addons-244316] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:25:29.782484  720494 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:25:29.782599  720494 notify.go:220] Checking for updates...
	I0920 19:25:29.787949  720494 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:25:29.791244  720494 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:25:29.793888  720494 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	I0920 19:25:29.796579  720494 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:25:29.799156  720494 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:25:29.802100  720494 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:25:29.830398  720494 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:25:29.830533  720494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:29.884753  720494 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:25:29.875307304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:29.884872  720494 docker.go:318] overlay module found
	I0920 19:25:29.887812  720494 out.go:177] * Using the docker driver based on user configuration
	I0920 19:25:29.890512  720494 start.go:297] selected driver: docker
	I0920 19:25:29.890532  720494 start.go:901] validating driver "docker" against <nil>
	I0920 19:25:29.890547  720494 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:25:29.891202  720494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:29.946608  720494 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:25:29.93724064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:29.946823  720494 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:25:29.947062  720494 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:25:29.949812  720494 out.go:177] * Using Docker driver with root privileges
	I0920 19:25:29.952570  720494 cni.go:84] Creating CNI manager for ""
	I0920 19:25:29.952644  720494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:25:29.952660  720494 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 19:25:29.952801  720494 start.go:340] cluster config:
	{Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:29.957445  720494 out.go:177] * Starting "addons-244316" primary control-plane node in "addons-244316" cluster
	I0920 19:25:29.960190  720494 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 19:25:29.963127  720494 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:25:29.965720  720494 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:25:29.965816  720494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:29.965854  720494 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0920 19:25:29.965880  720494 cache.go:56] Caching tarball of preloaded images
	I0920 19:25:29.965965  720494 preload.go:172] Found /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0920 19:25:29.965980  720494 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:25:29.966344  720494 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/config.json ...
	I0920 19:25:29.966373  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/config.json: {Name:mk6955f082c6754495d7aaba1d3a3077fbb595bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:29.982114  720494 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:25:29.982227  720494 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:25:29.982252  720494 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:25:29.982261  720494 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:25:29.982269  720494 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:25:29.982275  720494 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:25:47.951558  720494 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:25:47.951596  720494 cache.go:194] Successfully downloaded all kic artifacts
	I0920 19:25:47.951647  720494 start.go:360] acquireMachinesLock for addons-244316: {Name:mk0522c0afca04ad0b8b7308c1947c33a5b75632 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:25:47.951772  720494 start.go:364] duration metric: took 100.896µs to acquireMachinesLock for "addons-244316"
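The two lines above are minikube's file-based machines lock: the logged config shows Delay:500ms and Timeout:10m0s, and acquisition completed in about 100µs because nothing else held the lock. A hedged sketch of that retry-until-deadline pattern (hypothetical helper, not the actual lock.go implementation):

package main

import (
	"errors"
	"os"
	"time"
)

// tryAcquire polls for an exclusive lock file, sleeping delay between
// attempts and giving up once timeout has elapsed.
func tryAcquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay) // Delay:500ms in the logged config
	}
}

func main() {
	release, err := tryAcquire("/tmp/addons-244316.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
}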
	I0920 19:25:47.951805  720494 start.go:93] Provisioning new machine with config: &{Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:25:47.951885  720494 start.go:125] createHost starting for "" (driver="docker")
	I0920 19:25:47.953438  720494 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 19:25:47.953715  720494 start.go:159] libmachine.API.Create for "addons-244316" (driver="docker")
	I0920 19:25:47.953752  720494 client.go:168] LocalClient.Create starting
	I0920 19:25:47.953877  720494 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem
	I0920 19:25:48.990003  720494 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem
	I0920 19:25:49.508284  720494 cli_runner.go:164] Run: docker network inspect addons-244316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 19:25:49.524511  720494 cli_runner.go:211] docker network inspect addons-244316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 19:25:49.524596  720494 network_create.go:284] running [docker network inspect addons-244316] to gather additional debugging logs...
	I0920 19:25:49.524617  720494 cli_runner.go:164] Run: docker network inspect addons-244316
	W0920 19:25:49.541131  720494 cli_runner.go:211] docker network inspect addons-244316 returned with exit code 1
	I0920 19:25:49.541164  720494 network_create.go:287] error running [docker network inspect addons-244316]: docker network inspect addons-244316: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-244316 not found
	I0920 19:25:49.541203  720494 network_create.go:289] output of [docker network inspect addons-244316]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-244316 not found
	
	** /stderr **
	I0920 19:25:49.541314  720494 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:25:49.555672  720494 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004c8400}
	I0920 19:25:49.555719  720494 network_create.go:124] attempt to create docker network addons-244316 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 19:25:49.555776  720494 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-244316 addons-244316
	I0920 19:25:49.624569  720494 network_create.go:108] docker network addons-244316 192.168.49.0/24 created
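The free-subnet probe above derives Gateway 192.168.49.1, ClientMin 192.168.49.2, ClientMax 192.168.49.254 and Broadcast 192.168.49.255 from 192.168.49.0/24 before creating the network. The same arithmetic sketched in Go, assuming a plain IPv4 /24 (illustrative, not minikube's network package):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Assumes a /24, matching the subnet the log selected.
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], 1)   // 192.168.49.1
	clientMin := net.IPv4(base[0], base[1], base[2], 2) // first container IP
	clientMax := net.IPv4(base[0], base[1], base[2], 254)
	broadcast := net.IPv4(base[0], base[1], base[2], 255)
	fmt.Println(gateway, clientMin, clientMax, broadcast)
}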
	I0920 19:25:49.624607  720494 kic.go:121] calculated static IP "192.168.49.2" for the "addons-244316" container
	I0920 19:25:49.624710  720494 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 19:25:49.638281  720494 cli_runner.go:164] Run: docker volume create addons-244316 --label name.minikube.sigs.k8s.io=addons-244316 --label created_by.minikube.sigs.k8s.io=true
	I0920 19:25:49.656152  720494 oci.go:103] Successfully created a docker volume addons-244316
	I0920 19:25:49.656249  720494 cli_runner.go:164] Run: docker run --rm --name addons-244316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-244316 --entrypoint /usr/bin/test -v addons-244316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 19:25:50.909463  720494 cli_runner.go:217] Completed: docker run --rm --name addons-244316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-244316 --entrypoint /usr/bin/test -v addons-244316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.253168687s)
	I0920 19:25:50.909494  720494 oci.go:107] Successfully prepared a docker volume addons-244316
	I0920 19:25:50.909519  720494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:50.909540  720494 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 19:25:50.909613  720494 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-244316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 19:25:55.044839  720494 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-244316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.135176316s)
	I0920 19:25:55.044877  720494 kic.go:203] duration metric: took 4.135334236s to extract preloaded images to volume ...
	W0920 19:25:55.045079  720494 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 19:25:55.045238  720494 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 19:25:55.111173  720494 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-244316 --name addons-244316 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-244316 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-244316 --network addons-244316 --ip 192.168.49.2 --volume addons-244316:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 19:25:55.497137  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Running}}
	I0920 19:25:55.513667  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:25:55.540765  720494 cli_runner.go:164] Run: docker exec addons-244316 stat /var/lib/dpkg/alternatives/iptables
	I0920 19:25:55.618535  720494 oci.go:144] the created container "addons-244316" has a running status.
	I0920 19:25:55.618561  720494 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa...
	I0920 19:25:55.937892  720494 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 19:25:55.968552  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:25:56.000829  720494 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 19:25:56.000849  720494 kic_runner.go:114] Args: [docker exec --privileged addons-244316 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 19:25:56.069963  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:25:56.090448  720494 machine.go:93] provisionDockerMachine start ...
	I0920 19:25:56.090542  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:56.110367  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:56.110637  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:56.110647  720494 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:25:56.305168  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-244316
	
	I0920 19:25:56.305282  720494 ubuntu.go:169] provisioning hostname "addons-244316"
	I0920 19:25:56.305398  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:56.346342  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:56.346689  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:56.346713  720494 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-244316 && echo "addons-244316" | sudo tee /etc/hostname
	I0920 19:25:56.522053  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-244316
	
	I0920 19:25:56.522136  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:56.542986  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:56.543222  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:56.543240  720494 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-244316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-244316/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-244316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:25:56.688866  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: 
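Every provisioning command above (hostname, the /etc/hosts rewrite) runs through minikube's native SSH client against the forwarded port 127.0.0.1:32768 with the id_rsa key generated earlier. A minimal equivalent using golang.org/x/crypto/ssh, sketched for illustration; skipping host-key verification here is a simplifying assumption, not a claim about the real client:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port taken from the log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // expected: addons-244316
}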
	I0920 19:25:56.688893  720494 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-712952/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-712952/.minikube}
	I0920 19:25:56.688933  720494 ubuntu.go:177] setting up certificates
	I0920 19:25:56.688948  720494 provision.go:84] configureAuth start
	I0920 19:25:56.689025  720494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-244316
	I0920 19:25:56.706017  720494 provision.go:143] copyHostCerts
	I0920 19:25:56.706108  720494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem (1082 bytes)
	I0920 19:25:56.706235  720494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem (1123 bytes)
	I0920 19:25:56.706299  720494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem (1675 bytes)
	I0920 19:25:56.706352  720494 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem org=jenkins.addons-244316 san=[127.0.0.1 192.168.49.2 addons-244316 localhost minikube]
	I0920 19:25:57.019466  720494 provision.go:177] copyRemoteCerts
	I0920 19:25:57.019547  720494 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:25:57.019592  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.036410  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.138382  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 19:25:57.166107  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 19:25:57.190820  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:25:57.215294  720494 provision.go:87] duration metric: took 526.319417ms to configureAuth
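The configureAuth phase above copies the host certificates and signs a server certificate with san=[127.0.0.1 192.168.49.2 addons-244316 localhost minikube]. A hedged crypto/x509 fragment showing how such a certificate can be issued from an existing CA (hypothetical issueServerCert helper; minikube's provision code differs in detail):

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate carrying the same SANs the
// log shows, valid for the CertExpiration (26280h) from the cluster config.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-244316"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"addons-244316", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}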
	I0920 19:25:57.215365  720494 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:25:57.215581  720494 config.go:182] Loaded profile config "addons-244316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:57.215698  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.232471  720494 main.go:141] libmachine: Using SSH client type: native
	I0920 19:25:57.232769  720494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 19:25:57.232792  720494 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:25:57.476783  720494 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:25:57.476851  720494 machine.go:96] duration metric: took 1.386383012s to provisionDockerMachine
	I0920 19:25:57.476877  720494 client.go:171] duration metric: took 9.523113336s to LocalClient.Create
	I0920 19:25:57.476912  720494 start.go:167] duration metric: took 9.523196543s to libmachine.API.Create "addons-244316"
	I0920 19:25:57.476938  720494 start.go:293] postStartSetup for "addons-244316" (driver="docker")
	I0920 19:25:57.476964  720494 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:25:57.477048  720494 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:25:57.477143  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.493786  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.597872  720494 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:25:57.601104  720494 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:25:57.601148  720494 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:25:57.601160  720494 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:25:57.601168  720494 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:25:57.601178  720494 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/addons for local assets ...
	I0920 19:25:57.601253  720494 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/files for local assets ...
	I0920 19:25:57.601279  720494 start.go:296] duration metric: took 124.321395ms for postStartSetup
	I0920 19:25:57.601598  720494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-244316
	I0920 19:25:57.617888  720494 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/config.json ...
	I0920 19:25:57.618195  720494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:25:57.618252  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.634402  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.737760  720494 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:25:57.742480  720494 start.go:128] duration metric: took 9.790578414s to createHost
	I0920 19:25:57.742508  720494 start.go:83] releasing machines lock for "addons-244316", held for 9.790720023s
	I0920 19:25:57.742594  720494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-244316
	I0920 19:25:57.760148  720494 ssh_runner.go:195] Run: cat /version.json
	I0920 19:25:57.760203  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.760211  720494 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:25:57.760279  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:25:57.783475  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.784627  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:25:57.880299  720494 ssh_runner.go:195] Run: systemctl --version
	I0920 19:25:58.009150  720494 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:25:58.154311  720494 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:25:58.159113  720494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:25:58.179821  720494 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:25:58.179900  720494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:25:58.211641  720494 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 19:25:58.211670  720494 start.go:495] detecting cgroup driver to use...
	I0920 19:25:58.211707  720494 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:25:58.211764  720494 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:25:58.227213  720494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:25:58.239238  720494 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:25:58.239307  720494 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:25:58.254293  720494 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:25:58.268754  720494 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:25:58.352765  720494 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:25:58.452761  720494 docker.go:233] disabling docker service ...
	I0920 19:25:58.452850  720494 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:25:58.472668  720494 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:25:58.485779  720494 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:25:58.573268  720494 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:25:58.666873  720494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:25:58.679533  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:25:58.698607  720494 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:25:58.698720  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.709753  720494 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:25:58.709850  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.721514  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.732020  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.743803  720494 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:25:58.754937  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.765725  720494 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:25:58.784156  720494 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
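The run of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before CRI-O is restarted below. The same idempotent rewrite pattern expressed in Go, as an illustrative sketch (hypothetical setPauseImage helper):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage is a stand-in for the first sed command above: replace any
// existing pause_image line in the CRI-O drop-in, leaving other keys alone.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}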
	I0920 19:25:58.795101  720494 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:25:58.804507  720494 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:25:58.814571  720494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:25:58.906740  720494 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:25:59.036841  720494 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:25:59.037033  720494 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:25:59.041940  720494 start.go:563] Will wait 60s for crictl version
	I0920 19:25:59.042029  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:25:59.046343  720494 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:25:59.091228  720494 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 19:25:59.091392  720494 ssh_runner.go:195] Run: crio --version
	I0920 19:25:59.133146  720494 ssh_runner.go:195] Run: crio --version
	I0920 19:25:59.173996  720494 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 19:25:59.175094  720494 cli_runner.go:164] Run: docker network inspect addons-244316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:25:59.194435  720494 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 19:25:59.198004  720494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:25:59.208671  720494 kubeadm.go:883] updating cluster {Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:25:59.208837  720494 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:59.208896  720494 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:25:59.284925  720494 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:25:59.284952  720494 crio.go:433] Images already preloaded, skipping extraction
	I0920 19:25:59.285011  720494 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:25:59.326795  720494 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:25:59.326826  720494 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:25:59.326836  720494 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 19:25:59.326938  720494 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-244316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:25:59.327033  720494 ssh_runner.go:195] Run: crio config
	I0920 19:25:59.400041  720494 cni.go:84] Creating CNI manager for ""
	I0920 19:25:59.400067  720494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:25:59.400078  720494 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:25:59.400123  720494 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-244316 NodeName:addons-244316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:25:59.400318  720494 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-244316"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
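The kubeadm config rendered above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file; it is written out below as /var/tmp/minikube/kubeadm.yaml. On recent kubeadm releases the same file can be sanity-checked offline (a sketch, not part of this run):

	/var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml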
	I0920 19:25:59.400413  720494 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:25:59.409466  720494 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:25:59.409543  720494 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:25:59.418255  720494 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 19:25:59.436798  720494 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:25:59.454812  720494 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0920 19:25:59.472784  720494 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 19:25:59.476021  720494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:25:59.487107  720494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:25:59.575326  720494 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:25:59.590207  720494 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316 for IP: 192.168.49.2
	I0920 19:25:59.590230  720494 certs.go:194] generating shared ca certs ...
	I0920 19:25:59.590247  720494 certs.go:226] acquiring lock for ca certs: {Name:mk7d5a5d7b3ae5cfc59d92978e91627e15e3360b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:25:59.590385  720494 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key
	I0920 19:26:01.128707  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt ...
	I0920 19:26:01.128744  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt: {Name:mk1e04770eebce03242f88886403fc8aaa4cfe20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.129575  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key ...
	I0920 19:26:01.129604  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key: {Name:mka1be98ed1f78200fab01b6e2e3e6b22c64df46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.130163  720494 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key
	I0920 19:26:01.605890  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt ...
	I0920 19:26:01.605926  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt: {Name:mk03b39bb6b8251d65137612cf5e860b85386060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.606164  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key ...
	I0920 19:26:01.606193  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key: {Name:mk84b5b286008c7b39f1846c3a68b7450ec1aa33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:01.606319  720494 certs.go:256] generating profile certs ...
	I0920 19:26:01.606400  720494 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.key
	I0920 19:26:01.606424  720494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt with IP's: []
	I0920 19:26:02.051551  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt ...
	I0920 19:26:02.051591  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: {Name:mk4ce0de29683e22275174265e154c929722a947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.051776  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.key ...
	I0920 19:26:02.051790  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.key: {Name:mk93067dbaede2ab18fb6ecd46883d29e619fb22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.051868  720494 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239
	I0920 19:26:02.051891  720494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 19:26:02.516359  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239 ...
	I0920 19:26:02.516396  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239: {Name:mk04066709546d402e3fb86d226ae85095f6ecbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.516605  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239 ...
	I0920 19:26:02.516620  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239: {Name:mkf95597b5bfdb7c10c9fa46a41da8ae82c6dd73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.516735  720494 certs.go:381] copying /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt.37f1b239 -> /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt
	I0920 19:26:02.516829  720494 certs.go:385] copying /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key.37f1b239 -> /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key
	I0920 19:26:02.516886  720494 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key
	I0920 19:26:02.516908  720494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt with IP's: []
	I0920 19:26:02.897643  720494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt ...
	I0920 19:26:02.897677  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt: {Name:mk09dc4a7bfb678ac6c7e5b6b5d0beeda1b27aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.897877  720494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key ...
	I0920 19:26:02.897893  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key: {Name:mkdfdda2c3f5759ba75abfb95a8a24312a55704c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:02.898086  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 19:26:02.898132  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem (1082 bytes)
	I0920 19:26:02.898162  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:26:02.898190  720494 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem (1675 bytes)
	I0920 19:26:02.898795  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:26:02.926518  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:26:02.955404  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:26:02.983641  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:26:03.014867  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 19:26:03.046742  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:26:03.076519  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:26:03.109906  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:26:03.141479  720494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:26:03.168462  720494 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:26:03.189520  720494 ssh_runner.go:195] Run: openssl version
	I0920 19:26:03.195282  720494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:26:03.206954  720494 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:26:03.211167  720494 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:26:03.211239  720494 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:26:03.218399  720494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
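The two steps above wire minikube's CA into the system trust store: OpenSSL looks up CAs in /etc/ssl/certs via symlinks named after the certificate's subject hash, which is what the -hash invocation computes. Reproduced by hand (b5213941 is the hash actually used by this run):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0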
	I0920 19:26:03.227694  720494 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:26:03.230917  720494 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 19:26:03.230978  720494 kubeadm.go:392] StartCluster: {Name:addons-244316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-244316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:26:03.231066  720494 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:26:03.231129  720494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:26:03.270074  720494 cri.go:89] found id: ""
	I0920 19:26:03.270153  720494 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:26:03.280624  720494 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:26:03.291274  720494 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 19:26:03.291459  720494 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:26:03.302610  720494 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:26:03.302644  720494 kubeadm.go:157] found existing configuration files:
	
	I0920 19:26:03.302713  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:26:03.313478  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:26:03.313591  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:26:03.323025  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:26:03.332499  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:26:03.332592  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:26:03.341716  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:26:03.351516  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:26:03.351613  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:26:03.362846  720494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:26:03.376977  720494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:26:03.377091  720494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:26:03.387441  720494 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
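The long --ignore-preflight-errors list above skips checks that are expected to fail when the "node" is a Docker container (swap, kernel config, manifests directory already populated, etc.). The preflight phase can also be exercised in isolation; a sketch, assuming the same rendered config:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all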
	I0920 19:26:03.434194  720494 kubeadm.go:310] W0920 19:26:03.433484    1189 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:26:03.436312  720494 kubeadm.go:310] W0920 19:26:03.435724    1189 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:26:03.478542  720494 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 19:26:03.547044  720494 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:26:21.145323  720494 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:26:21.145408  720494 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:26:21.145508  720494 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 19:26:21.145578  720494 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 19:26:21.145618  720494 kubeadm.go:310] OS: Linux
	I0920 19:26:21.145685  720494 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 19:26:21.145790  720494 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 19:26:21.145851  720494 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 19:26:21.145900  720494 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 19:26:21.145958  720494 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 19:26:21.146008  720494 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 19:26:21.146053  720494 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 19:26:21.146100  720494 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 19:26:21.146148  720494 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 19:26:21.146220  720494 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:26:21.146332  720494 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:26:21.146434  720494 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:26:21.146500  720494 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:26:21.148270  720494 out.go:235]   - Generating certificates and keys ...
	I0920 19:26:21.148377  720494 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:26:21.148446  720494 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:26:21.148515  720494 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 19:26:21.148586  720494 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 19:26:21.148656  720494 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 19:26:21.148741  720494 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 19:26:21.148806  720494 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 19:26:21.148925  720494 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-244316 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:26:21.148982  720494 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 19:26:21.149096  720494 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-244316 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:26:21.149163  720494 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 19:26:21.149230  720494 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 19:26:21.149278  720494 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 19:26:21.149337  720494 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:26:21.149392  720494 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:26:21.149453  720494 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:26:21.149507  720494 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:26:21.149572  720494 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:26:21.149629  720494 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:26:21.149710  720494 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:26:21.149782  720494 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:26:21.150997  720494 out.go:235]   - Booting up control plane ...
	I0920 19:26:21.151103  720494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:26:21.151182  720494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:26:21.151253  720494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:26:21.151362  720494 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:26:21.151450  720494 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:26:21.151493  720494 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:26:21.151625  720494 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:26:21.151731  720494 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:26:21.151792  720494 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.002195614s
	I0920 19:26:21.151866  720494 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:26:21.151928  720494 kubeadm.go:310] [api-check] The API server is healthy after 5.502091486s
	I0920 19:26:21.152036  720494 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:26:21.152163  720494 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:26:21.152225  720494 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:26:21.152406  720494 kubeadm.go:310] [mark-control-plane] Marking the node addons-244316 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:26:21.152465  720494 kubeadm.go:310] [bootstrap-token] Using token: z8az5e.wrm7la03ugzjp7n2
	I0920 19:26:21.154261  720494 out.go:235]   - Configuring RBAC rules ...
	I0920 19:26:21.154478  720494 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:26:21.154586  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:26:21.154732  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:26:21.154909  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:26:21.155048  720494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:26:21.155175  720494 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:26:21.155311  720494 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:26:21.155368  720494 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:26:21.155442  720494 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:26:21.155457  720494 kubeadm.go:310] 
	I0920 19:26:21.155528  720494 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:26:21.155538  720494 kubeadm.go:310] 
	I0920 19:26:21.155614  720494 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:26:21.155625  720494 kubeadm.go:310] 
	I0920 19:26:21.155651  720494 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:26:21.155712  720494 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:26:21.155767  720494 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:26:21.155774  720494 kubeadm.go:310] 
	I0920 19:26:21.155828  720494 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:26:21.155837  720494 kubeadm.go:310] 
	I0920 19:26:21.155891  720494 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:26:21.155899  720494 kubeadm.go:310] 
	I0920 19:26:21.155953  720494 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:26:21.156030  720494 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:26:21.156101  720494 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:26:21.156108  720494 kubeadm.go:310] 
	I0920 19:26:21.156190  720494 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:26:21.156274  720494 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:26:21.156280  720494 kubeadm.go:310] 
	I0920 19:26:21.156362  720494 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token z8az5e.wrm7la03ugzjp7n2 \
	I0920 19:26:21.156468  720494 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9dcbae36a1cb65f9099573ad9fac7ebc036c2eab288a010b4e8645c68ec99bdd \
	I0920 19:26:21.156491  720494 kubeadm.go:310] 	--control-plane 
	I0920 19:26:21.156500  720494 kubeadm.go:310] 
	I0920 19:26:21.156585  720494 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:26:21.156594  720494 kubeadm.go:310] 
	I0920 19:26:21.156675  720494 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token z8az5e.wrm7la03ugzjp7n2 \
	I0920 19:26:21.156882  720494 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9dcbae36a1cb65f9099573ad9fac7ebc036c2eab288a010b4e8645c68ec99bdd 
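The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate, which minikube keeps under /var/lib/minikube/certs rather than the stock /etc/kubernetes/pki path; a sketch, assuming the RSA CA generated earlier in this log:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'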
	I0920 19:26:21.156920  720494 cni.go:84] Creating CNI manager for ""
	I0920 19:26:21.156929  720494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:26:21.158769  720494 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 19:26:21.160058  720494 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 19:26:21.164256  720494 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 19:26:21.164293  720494 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 19:26:21.182687  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 19:26:21.476771  720494 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:26:21.476873  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:21.476920  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-244316 minikube.k8s.io/updated_at=2024_09_20T19_26_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-244316 minikube.k8s.io/primary=true
	I0920 19:26:21.502145  720494 ops.go:34] apiserver oom_adj: -16
	I0920 19:26:21.606164  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:22.106239  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:22.606963  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:23.106410  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:23.607082  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:24.106927  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:24.606481  720494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:26:24.731028  720494 kubeadm.go:1113] duration metric: took 3.254235742s to wait for elevateKubeSystemPrivileges
	I0920 19:26:24.731059  720494 kubeadm.go:394] duration metric: took 21.500084875s to StartCluster
	I0920 19:26:24.731077  720494 settings.go:142] acquiring lock: {Name:mk4ddd924228bcf0d3a34d801111d62307b61b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:24.731199  720494 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:26:24.731573  720494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/kubeconfig: {Name:mk7d8753aacb2df257bd5191c7b120c25eed71dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:26:24.732243  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 19:26:24.732578  720494 config.go:182] Loaded profile config "addons-244316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:26:24.732726  720494 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 19:26:24.732818  720494 addons.go:69] Setting yakd=true in profile "addons-244316"
	I0920 19:26:24.732834  720494 addons.go:234] Setting addon yakd=true in "addons-244316"
	I0920 19:26:24.732858  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.733357  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.733547  720494 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:26:24.733884  720494 addons.go:69] Setting cloud-spanner=true in profile "addons-244316"
	I0920 19:26:24.733908  720494 addons.go:234] Setting addon cloud-spanner=true in "addons-244316"
	I0920 19:26:24.733933  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.734414  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.734698  720494 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-244316"
	I0920 19:26:24.734731  720494 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-244316"
	I0920 19:26:24.734761  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.735207  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.737708  720494 addons.go:69] Setting registry=true in profile "addons-244316"
	I0920 19:26:24.738394  720494 addons.go:234] Setting addon registry=true in "addons-244316"
	I0920 19:26:24.738478  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.738988  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.743232  720494 addons.go:69] Setting storage-provisioner=true in profile "addons-244316"
	I0920 19:26:24.743320  720494 addons.go:234] Setting addon storage-provisioner=true in "addons-244316"
	I0920 19:26:24.743377  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.743891  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.748317  720494 addons.go:69] Setting default-storageclass=true in profile "addons-244316"
	I0920 19:26:24.748414  720494 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-244316"
	I0920 19:26:24.748919  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.756797  720494 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-244316"
	I0920 19:26:24.756882  720494 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-244316"
	I0920 19:26:24.757259  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.762988  720494 addons.go:69] Setting gcp-auth=true in profile "addons-244316"
	I0920 19:26:24.763039  720494 mustload.go:65] Loading cluster: addons-244316
	I0920 19:26:24.763255  720494 config.go:182] Loaded profile config "addons-244316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:26:24.763519  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.763925  720494 addons.go:69] Setting ingress=true in profile "addons-244316"
	I0920 19:26:24.763954  720494 addons.go:234] Setting addon ingress=true in "addons-244316"
	I0920 19:26:24.763999  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.764434  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.765037  720494 addons.go:69] Setting volcano=true in profile "addons-244316"
	I0920 19:26:24.765061  720494 addons.go:234] Setting addon volcano=true in "addons-244316"
	I0920 19:26:24.765092  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.765521  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.776878  720494 addons.go:69] Setting ingress-dns=true in profile "addons-244316"
	I0920 19:26:24.776920  720494 addons.go:234] Setting addon ingress-dns=true in "addons-244316"
	I0920 19:26:24.776986  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.777889  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.784771  720494 addons.go:69] Setting volumesnapshots=true in profile "addons-244316"
	I0920 19:26:24.784812  720494 addons.go:234] Setting addon volumesnapshots=true in "addons-244316"
	I0920 19:26:24.784851  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.785348  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.791077  720494 addons.go:69] Setting inspektor-gadget=true in profile "addons-244316"
	I0920 19:26:24.791114  720494 addons.go:234] Setting addon inspektor-gadget=true in "addons-244316"
	I0920 19:26:24.791157  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.791640  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.804082  720494 addons.go:69] Setting metrics-server=true in profile "addons-244316"
	I0920 19:26:24.804161  720494 out.go:177] * Verifying Kubernetes components...
	I0920 19:26:24.811690  720494 addons.go:234] Setting addon metrics-server=true in "addons-244316"
	I0920 19:26:24.811764  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.812275  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.738362  720494 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-244316"
	I0920 19:26:24.829194  720494 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-244316"
	I0920 19:26:24.829235  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.829723  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.850636  720494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:26:24.850683  720494 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 19:26:24.876752  720494 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 19:26:24.886705  720494 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 19:26:24.893510  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.895810  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 19:26:24.895828  720494 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 19:26:24.895890  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.904782  720494 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 19:26:24.906634  720494 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 19:26:24.911282  720494 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:26:24.911362  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 19:26:24.911513  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.914963  720494 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 19:26:24.915046  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 19:26:24.915159  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.920961  720494 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 19:26:24.921038  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 19:26:24.921135  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.958752  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 19:26:24.960237  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 19:26:24.960307  720494 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 19:26:24.964077  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:24.967158  720494 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 19:26:24.968275  720494 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-244316"
	I0920 19:26:24.968317  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:24.971597  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:24.988157  720494 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 19:26:24.988245  720494 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 19:26:24.988355  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.008361  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:26:25.008545  720494 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 19:26:25.021818  720494 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:26:25.027173  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 19:26:25.028992  720494 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:26:25.029020  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 19:26:25.029087  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.030454  720494 addons.go:234] Setting addon default-storageclass=true in "addons-244316"
	I0920 19:26:25.030503  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:25.030960  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:25.049224  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:26:25.057818  720494 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:26:25.057891  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:26:25.057977  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.069805  720494 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:26:25.069834  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 19:26:25.069903  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.073303  720494 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 19:26:25.074477  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 19:26:25.105627  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:26:25.105745  720494 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:26:25.105935  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.122314  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.123447  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
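The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts block resolving host.minikube.internal to 192.168.49.1 (with fallthrough for all other names) ahead of the forward plugin, and enables the log plugin, before replacing the ConfigMap. The result can be inspected afterwards; a sketch using the cluster's kubeconfig:

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'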
	I0920 19:26:25.124754  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0920 19:26:25.125412  720494 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 19:26:25.130076  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 19:26:25.133405  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 19:26:25.135850  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 19:26:25.142193  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 19:26:25.143570  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 19:26:25.145106  720494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 19:26:25.146269  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 19:26:25.146298  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 19:26:25.146395  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.191104  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.212528  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.248811  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.265661  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.284008  720494 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:26:25.284030  720494 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:26:25.284093  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:25.287036  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.299833  720494 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 19:26:25.305881  720494 out.go:177]   - Using image docker.io/busybox:stable
	I0920 19:26:25.312108  720494 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:26:25.312162  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 19:26:25.312243  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
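(Editor's note: the "scp memory --> <path> (N bytes)" entries indicate the source is an asset held in memory by the minikube binary rather than a file on the runner's disk, streamed to the node over the SSH session. A rough, self-contained sketch of that step with golang.org/x/crypto/ssh, assuming the port, username, and key path from the sshutil lines above; the manifest bytes, the target name example.yaml, and the tee-based copy are placeholders, not minikube's exact code path.)

	package main

	import (
		"bytes"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user, and port mirror the sshutil lines in the log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Stream the in-memory bytes to the addon path on the node.
		sess.Stdin = bytes.NewReader([]byte("# manifest bytes held in memory\n"))
		if err := sess.Run("sudo tee /etc/kubernetes/addons/example.yaml >/dev/null"); err != nil {
			panic(err)
		}
	}
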
	I0920 19:26:25.316161  720494 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:26:25.348052  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.368019  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.368977  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.369628  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.379009  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.401631  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.411379  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:25.628116  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:26:25.691608  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 19:26:25.691689  720494 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 19:26:25.754633  720494 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 19:26:25.754725  720494 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 19:26:25.783343  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:26:25.807311  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:26:25.811289  720494 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 19:26:25.811364  720494 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 19:26:25.814757  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 19:26:25.814831  720494 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 19:26:25.822773  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:26:25.822846  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 19:26:25.847634  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:26:25.850739  720494 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 19:26:25.850817  720494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 19:26:25.871877  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:26:25.874400  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 19:26:25.904843  720494 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 19:26:25.904924  720494 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 19:26:25.931038  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:26:25.931145  720494 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:26:25.937014  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 19:26:25.937091  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 19:26:25.946447  720494 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:26:25.946510  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 19:26:25.952435  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:26:26.003047  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 19:26:26.003132  720494 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 19:26:26.055471  720494 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 19:26:26.055563  720494 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 19:26:26.058191  720494 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 19:26:26.058277  720494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 19:26:26.108523  720494 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:26:26.108607  720494 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:26:26.120555  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 19:26:26.120639  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 19:26:26.143676  720494 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:26:26.143751  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 19:26:26.159321  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:26:26.223265  720494 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 19:26:26.223347  720494 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 19:26:26.239958  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:26:26.290920  720494 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 19:26:26.291005  720494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 19:26:26.309158  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:26:26.335256  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 19:26:26.335338  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 19:26:26.351619  720494 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 19:26:26.351703  720494 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 19:26:26.442556  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 19:26:26.442634  720494 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 19:26:26.510624  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 19:26:26.510716  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 19:26:26.521274  720494 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 19:26:26.521400  720494 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 19:26:26.561917  720494 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:26:26.562042  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 19:26:26.611747  720494 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 19:26:26.611825  720494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 19:26:26.612177  720494 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:26:26.612229  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 19:26:26.627234  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:26:26.678176  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 19:26:26.678252  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 19:26:26.694345  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:26:26.778377  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 19:26:26.778461  720494 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 19:26:26.948977  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 19:26:26.949051  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 19:26:27.077619  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 19:26:27.077706  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 19:26:27.165214  720494 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:26:27.165330  720494 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 19:26:27.323372  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:26:28.781000  720494 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.657515938s)
	I0920 19:26:28.781029  720494 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
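(Editor's note: the sed pipeline that just completed after 3.66s, issued at 19:26:25.123 above, splices two fragments into the coredns ConfigMap before `kubectl replace`-ing it: a hosts block immediately ahead of the forward plugin, and a log directive ahead of errors. The resulting Corefile section, reconstructed directly from the sed expressions with the untouched plugins elided:)

	.:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	}

(With this in place, host.minikube.internal resolves to the container network's gateway 192.168.49.1, and fallthrough hands every other name on to the forward plugin as before.)
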
	I0920 19:26:28.782336  720494 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.466143903s)
	I0920 19:26:28.783478  720494 node_ready.go:35] waiting up to 6m0s for node "addons-244316" to be "Ready" ...
	I0920 19:26:28.800679  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.172462658s)
	I0920 19:26:29.525301  720494 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-244316" context rescaled to 1 replicas
	I0920 19:26:30.797646  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:31.267609  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.460211174s)
	I0920 19:26:31.267696  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.419987423s)
	I0920 19:26:31.267729  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.395787758s)
	I0920 19:26:31.267762  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.393293006s)
	I0920 19:26:31.267799  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.315298297s)
	I0920 19:26:31.267824  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.108422582s)
	I0920 19:26:31.268250  720494 addons.go:475] Verifying addon registry=true in "addons-244316"
	I0920 19:26:31.268429  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.484989303s)
	I0920 19:26:31.268458  720494 addons.go:475] Verifying addon ingress=true in "addons-244316"
	I0920 19:26:31.267881  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.02785039s)
	I0920 19:26:31.268830  720494 addons.go:475] Verifying addon metrics-server=true in "addons-244316"
	I0920 19:26:31.267910  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.958677932s)
	I0920 19:26:31.271284  720494 out.go:177] * Verifying registry addon...
	I0920 19:26:31.271359  720494 out.go:177] * Verifying ingress addon...
	I0920 19:26:31.272983  720494 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-244316 service yakd-dashboard -n yakd-dashboard
	
	I0920 19:26:31.275860  720494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 19:26:31.277001  720494 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 19:26:31.316121  720494 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 19:26:31.316161  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:31.317362  720494 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 19:26:31.317387  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
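(Editor's note: the kapi.go:75/86/96 lines above, and the long run of "waiting for pod ... Pending" entries below, are one polling loop per addon: list pods matching a label selector, check their phases, repeat until every pod is Running or the timeout expires. A compact client-go sketch of that loop; this is not minikube's actual kapi.go, and the poll interval and namespace are illustrative:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		selector := "kubernetes.io/minikube-addons=registry" // label from the log
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond,
			6*time.Minute, true, func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
					metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // treat errors and empty lists as "keep polling"
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // at least one pod still Pending
					}
				}
				return true, nil
			})
		fmt.Println("wait result:", err)
	}
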
	W0920 19:26:31.351356  720494 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
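(Editor's note: the storage-provisioner-rancher warning above is a standard optimistic-concurrency Conflict: something else updated the local-path StorageClass between the read and the write, so the apiserver rejected the stale object. The usual remedy is to re-read and retry the update, as in this standalone client-go sketch; the annotation key is the standard default-class marker, and the whole program is an illustration, not minikube's fix:)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()
		// Re-read and re-apply on every Conflict, instead of failing on the
		// first one like the single-shot attempt in the warning above.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict here triggers another Get+Update round
		})
		fmt.Println("mark default:", err)
	}
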
	I0920 19:26:31.428421  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.80109277s)
	W0920 19:26:31.428552  720494 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 19:26:31.428610  720494 retry.go:31] will retry after 262.995193ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
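(Editor's note: the failure being retried here is the usual CRD establishment race: a single `kubectl apply` both creates the VolumeSnapshot CRDs and instantiates a VolumeSnapshotClass, and the apiserver's discovery does not yet know the new kind, hence "ensure CRDs are installed first". The first pass exits 1, retry.go schedules another attempt, and the rerun at 19:26:31.692 below, with `--force` added, completes at 19:26:34.992. A generic Go sketch of that retry-apply shape; the file list, attempt count, and backoff schedule are illustrative, not minikube's:)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` until the CRD-backed objects map,
	// doubling the pause between attempts. Illustrative only; minikube's retry
	// lives in retry.go with its own backoff schedule.
	func applyWithRetry(files []string, attempts int) error {
		backoff := 250 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			args := []string{"apply"}
			for _, f := range files {
				args = append(args, "-f", f)
			}
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
			time.Sleep(backoff)
			backoff *= 2
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry([]string{
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		}, 5)
		fmt.Println("apply:", err)
	}
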
	I0920 19:26:31.428729  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.734302903s)
	I0920 19:26:31.692832  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:26:31.785593  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:31.787168  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:31.807816  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.48433629s)
	I0920 19:26:31.807856  720494 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-244316"
	I0920 19:26:31.812633  720494 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 19:26:31.816455  720494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 19:26:31.827502  720494 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 19:26:31.827532  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:32.319083  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:32.338243  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:32.343594  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:32.780997  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:32.782488  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:32.821766  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:33.292797  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:33.293772  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:33.294669  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:33.320656  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:33.550663  720494 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 19:26:33.550800  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:33.572839  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:33.737603  720494 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 19:26:33.782544  720494 addons.go:234] Setting addon gcp-auth=true in "addons-244316"
	I0920 19:26:33.782601  720494 host.go:66] Checking if "addons-244316" exists ...
	I0920 19:26:33.783145  720494 cli_runner.go:164] Run: docker container inspect addons-244316 --format={{.State.Status}}
	I0920 19:26:33.786655  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:33.788546  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:33.798699  720494 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 19:26:33.798751  720494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-244316
	I0920 19:26:33.821497  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:33.823451  720494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/addons-244316/id_rsa Username:docker}
	I0920 19:26:34.284102  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:34.289618  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:34.323390  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:34.779640  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:34.781007  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:34.820473  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:34.992350  720494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.299433758s)
	I0920 19:26:34.992431  720494 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.193714155s)
	I0920 19:26:34.995444  720494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:26:34.997869  720494 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 19:26:35.001709  720494 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 19:26:35.001756  720494 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 19:26:35.035728  720494 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 19:26:35.035759  720494 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 19:26:35.079944  720494 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:26:35.079979  720494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 19:26:35.102984  720494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:26:35.294596  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:35.295376  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:35.296628  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:35.323531  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:35.758392  720494 addons.go:475] Verifying addon gcp-auth=true in "addons-244316"
	I0920 19:26:35.761764  720494 out.go:177] * Verifying gcp-auth addon...
	I0920 19:26:35.765377  720494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 19:26:35.775895  720494 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 19:26:35.775929  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:35.783065  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:35.788830  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:35.820810  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:36.269858  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:36.279954  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:36.283043  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:36.320621  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:36.768866  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:36.779447  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:36.781676  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:36.820993  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:37.269034  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:37.282448  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:37.285567  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:37.321631  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:37.773878  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:37.779965  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:37.784905  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:37.788061  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:37.822000  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:38.269379  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:38.281874  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:38.282763  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:38.320741  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:38.769226  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:38.780873  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:38.782249  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:38.821403  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:39.269763  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:39.282689  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:39.283733  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:39.319865  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:39.770281  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:39.780666  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:39.781505  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:39.819986  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:40.269726  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:40.284516  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:40.288013  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:40.289324  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:40.321134  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:40.768854  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:40.781445  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:40.782576  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:40.820776  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:41.270401  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:41.282485  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:41.286141  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:41.320497  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:41.769918  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:41.781588  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:41.781944  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:41.820204  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:42.269713  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:42.283956  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:42.285628  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:42.289754  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:42.324052  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:42.768905  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:42.779837  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:42.781697  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:42.820416  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:43.269483  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:43.282444  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:43.289492  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:43.320876  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:43.769566  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:43.780213  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:43.781387  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:43.820582  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:44.268685  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:44.283172  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:44.284818  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:44.319863  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:44.768823  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:44.779384  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:44.780902  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:44.787478  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:44.820117  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:45.271913  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:45.292436  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:45.292763  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:45.321040  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:45.768511  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:45.780265  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:45.781678  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:45.819905  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:46.268442  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:46.281477  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:46.283481  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:46.321173  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:46.769096  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:46.780569  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:46.781603  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:46.820668  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:47.269903  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:47.283811  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:47.285562  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:47.288580  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:47.321196  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:47.769692  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:47.780998  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:47.781170  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:47.820425  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:48.269042  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:48.280340  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:48.282709  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:48.320815  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:48.775148  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:48.780544  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:48.780639  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:48.819936  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:49.268525  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:49.287431  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:49.289191  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:49.290872  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:49.319847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:49.769384  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:49.779298  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:49.781188  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:49.820383  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:50.269301  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:50.282411  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:50.285240  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:50.321060  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:50.769473  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:50.779443  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:50.781486  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:50.820597  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:51.270112  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:51.282639  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:51.283088  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:51.320080  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:51.770106  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:51.780583  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:51.782027  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:51.787638  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:51.821886  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:52.268519  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:52.283176  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:52.284064  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:52.320872  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:52.769214  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:52.780683  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:52.781634  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:52.820510  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:53.268723  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:53.282200  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:53.283249  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:53.319884  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:53.769787  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:53.779902  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:53.781216  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:53.787956  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:53.820259  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:54.268727  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:54.284290  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:54.286675  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:54.320499  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:54.770197  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:54.780336  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:54.780886  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:54.872159  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:55.269901  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:55.283331  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:55.284928  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:55.322123  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:55.769377  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:55.786795  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:55.788318  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:55.792405  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:55.820273  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:56.269247  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:56.282020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:56.282671  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:56.320941  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:56.768548  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:56.779663  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:56.781168  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:56.823458  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:57.270683  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:57.280849  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:57.289341  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:57.320313  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:57.770057  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:57.781886  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:57.782805  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:57.820734  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:58.269602  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:58.287535  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:58.289519  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:58.290728  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:26:58.320213  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:58.775347  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:58.780054  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:58.779009  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:58.820774  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:59.270294  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:59.286626  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:59.286677  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:59.320640  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:26:59.769280  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:59.778971  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:26:59.782087  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:26:59.820716  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:00.309244  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:00.318305  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:00.319591  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:00.339041  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:00.343407  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:00.769300  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:00.780065  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:00.781157  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:00.820504  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:01.269978  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:01.280960  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:01.281807  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:01.320569  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:01.770020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:01.779716  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:01.780878  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:01.820495  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:02.268783  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:02.288170  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:02.289424  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:02.320500  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:02.769169  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:02.779328  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:02.780818  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:02.787295  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:02.820714  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:03.269333  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:03.282502  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:03.283193  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:03.320347  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:03.768910  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:03.779162  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:03.786884  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:03.820888  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:04.268561  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:04.282839  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:04.286144  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:04.319847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:04.769755  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:04.779324  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:04.781769  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:04.787978  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:04.819936  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:05.269186  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:05.279197  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:05.282877  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:05.320475  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:05.768569  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:05.780966  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:05.781692  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:05.820438  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:06.268480  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:06.281239  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:06.282090  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:06.322308  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:06.769661  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:06.779803  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:06.781464  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:06.819852  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:07.268837  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:07.281876  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:07.284988  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:07.286363  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:07.320953  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:07.769044  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:07.779879  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:07.781635  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:07.820446  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:08.269819  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:08.279826  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:08.282143  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:08.320863  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:08.769566  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:08.780628  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:08.781408  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:08.820608  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:09.269486  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:09.282792  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:09.284787  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:09.288825  720494 node_ready.go:53] node "addons-244316" has status "Ready":"False"
	I0920 19:27:09.320872  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:09.771436  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:09.871052  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:09.871872  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:09.872778  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:10.268788  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:10.281335  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:10.282038  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:10.320121  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:10.790338  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:10.798021  720494 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 19:27:10.798094  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:10.802689  720494 node_ready.go:49] node "addons-244316" has status "Ready":"True"
	I0920 19:27:10.802757  720494 node_ready.go:38] duration metric: took 42.019246373s for node "addons-244316" to be "Ready" ...
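The node_ready lines up to this point trace a simple readiness poll: re-fetch the Node object and inspect its Ready condition until it flips from "False" to "True" (42s here). A minimal client-go sketch of such a loop, assuming a configured clientset; this is an illustration, not minikube's actual node_ready.go, and the helper name waitNodeReady is hypothetical:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition reports True,
// logging each observation much like the `has status "Ready":"False"` lines.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not reported yet
		})
}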
	I0920 19:27:10.802790  720494 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:27:10.812816  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:10.826520  720494 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 19:27:10.826550  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:10.835433  720494 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-22l55" in "kube-system" namespace to be "Ready" ...
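The kapi.go:96 lines, by contrast, track a label selector rather than a single named pod: the wait lists matching pods and stays in "Pending" until every match reports the PodReady condition as True, which is why "Found 2 Pods for label selector ..." above is still followed by further waiting. A rough sketch of that selector-based wait under the same assumptions, with hypothetical names:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsReadyBySelector lists pods matching selector in ns and polls until
// every match is Ready; an empty list keeps the wait in a Pending-like state.
func waitPodsReadyBySelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing listed yet: keep waiting
			}
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
}

// isPodReady reports whether a pod's PodReady condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}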
	I0920 19:27:11.293900  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:11.305989  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:11.307104  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:11.332577  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:11.783357  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:11.784517  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:11.784931  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:11.849334  720494 pod_ready.go:93] pod "coredns-7c65d6cfc9-22l55" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.849418  720494 pod_ready.go:82] duration metric: took 1.013937392s for pod "coredns-7c65d6cfc9-22l55" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.849456  720494 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.868495  720494 pod_ready.go:93] pod "etcd-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.868569  720494 pod_ready.go:82] duration metric: took 19.076003ms for pod "etcd-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.868600  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.875423  720494 pod_ready.go:93] pod "kube-apiserver-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.875560  720494 pod_ready.go:82] duration metric: took 6.929545ms for pod "kube-apiserver-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.875595  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.879213  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:11.884216  720494 pod_ready.go:93] pod "kube-controller-manager-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.884288  720494 pod_ready.go:82] duration metric: took 8.628615ms for pod "kube-controller-manager-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.884318  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2cdvm" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.988094  720494 pod_ready.go:93] pod "kube-proxy-2cdvm" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:11.988130  720494 pod_ready.go:82] duration metric: took 103.789214ms for pod "kube-proxy-2cdvm" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:11.988147  720494 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:12.269264  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:12.287208  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:12.289033  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:12.322571  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:12.388606  720494 pod_ready.go:93] pod "kube-scheduler-addons-244316" in "kube-system" namespace has status "Ready":"True"
	I0920 19:27:12.388638  720494 pod_ready.go:82] duration metric: took 400.478914ms for pod "kube-scheduler-addons-244316" in "kube-system" namespace to be "Ready" ...
	I0920 19:27:12.388653  720494 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace to be "Ready" ...
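The rest of this wait (the repeated "Ready":"False" lines below) is the same per-pod condition check applied to metrics-server-84c5f94fbc-zn5jl. Tying the sketches above together, a hypothetical call, assuming a configured clientset cs and that the pod carries the upstream k8s-app=metrics-server label (an assumption; the label is not shown in this log):

// Hypothetical usage of waitPodsReadyBySelector from the sketch above;
// cs and the label selector are assumptions, not taken from this log.
ctx := context.Background()
if err := waitPodsReadyBySelector(ctx, cs, "kube-system", "k8s-app=metrics-server", 6*time.Minute); err != nil {
	fmt.Println("metrics-server never became Ready:", err)
}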
	I0920 19:27:12.770087  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:12.781393  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:12.785622  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:12.822319  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:13.269603  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:13.296337  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:13.296766  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:13.322590  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:13.769847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:13.779693  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:13.782433  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:13.822091  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:14.269252  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:14.280182  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:14.284723  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:14.322263  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:14.398349  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:14.770832  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:14.783054  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:14.784559  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:14.822909  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:15.270387  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:15.285696  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:15.290910  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:15.326172  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:15.770026  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:15.783567  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:15.785272  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:15.824485  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:16.270241  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:16.284794  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:16.285654  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:16.323741  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:16.770988  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:16.786341  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:16.788159  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:16.824214  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:16.898178  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:17.268906  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:17.285452  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:17.297778  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:17.323090  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:17.770096  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:17.783132  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:17.791255  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:17.822351  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:18.269424  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:18.280994  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:18.282707  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:18.321372  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:18.769666  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:18.781587  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:18.784235  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:18.822682  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:19.269470  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:19.283677  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:19.288629  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:19.321699  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:19.396588  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:19.772980  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:19.780803  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:19.782719  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:19.875556  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:20.269492  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:20.291670  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:20.292866  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:20.337167  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:20.773159  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:20.784988  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:20.787988  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:20.872046  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:21.269963  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:21.282783  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:21.286803  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:21.322202  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:21.405583  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:21.783199  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:21.783670  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:21.784876  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:21.821833  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:22.269088  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:22.284065  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:22.285339  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:22.321055  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:22.770100  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:22.781884  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:22.782968  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:22.823045  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:23.270418  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:23.303569  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:23.309281  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:23.339078  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:23.769506  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:23.783194  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:23.785917  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:23.822439  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:23.897984  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:24.269340  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:24.291455  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:24.292763  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:24.323361  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:24.769968  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:24.782088  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:24.783038  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:24.822751  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:25.272308  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:25.283430  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:25.283627  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:25.374953  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:25.769005  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:25.780915  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:25.781787  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:25.823930  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:25.903531  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:26.269834  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:26.282530  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:26.283167  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:26.322054  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:26.772379  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:26.782423  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:26.783631  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:26.825085  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:27.269779  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:27.284418  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:27.284806  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:27.338146  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:27.769342  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:27.780314  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:27.781854  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:27.821476  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:28.269286  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:28.286594  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:28.287661  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:28.321326  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:28.396550  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:28.769726  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:28.786411  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:28.789226  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:28.823669  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:29.273075  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:29.294479  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:29.294798  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:29.321458  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:29.783611  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:29.801323  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:29.802103  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:29.822898  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:30.270237  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:30.281437  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:30.288878  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:30.323004  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:30.397272  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:30.770019  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:30.781553  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:30.784063  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:30.821814  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:31.274829  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:31.376343  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:31.376669  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:31.377851  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:31.770717  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:31.872819  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:31.874407  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:31.874951  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:32.269835  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:32.289378  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:32.296761  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:32.336615  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:32.770473  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:32.780243  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:32.783111  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:32.822024  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:32.895154  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:33.269100  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:33.285151  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:33.286306  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:33.321947  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:33.769592  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:33.785588  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:33.787506  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:33.823169  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:34.270017  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:34.296394  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:34.298155  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:34.323308  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:34.771050  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:34.779942  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:34.783226  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:34.823169  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:34.896141  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:35.271216  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:35.287615  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:35.287751  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:35.321772  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:35.769122  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:35.779720  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:35.783006  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:35.825488  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:36.271960  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:36.283707  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:36.285920  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:36.323409  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:36.769923  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:36.783227  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:36.784870  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:36.823594  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:36.898558  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:37.269755  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:37.293610  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:37.295883  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:37.324183  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:37.770650  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:37.787794  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:37.790021  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:37.825192  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:38.271469  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:38.286644  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:38.295621  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:38.372751  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:38.770626  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:38.783653  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:38.785086  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:38.828413  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:38.899046  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:39.269283  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:39.289657  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:39.290851  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:39.322100  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:39.769808  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:39.780111  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:39.782297  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:39.822102  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:40.269888  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:40.283081  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:40.289576  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:40.321540  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:40.771932  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:40.786292  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:40.787574  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:40.822514  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:40.902622  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:41.293096  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:41.293655  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:41.295135  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:41.383616  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:41.769623  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:41.780568  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:41.782685  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:41.821358  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:42.270092  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:42.283534  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:42.285074  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:42.323092  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:42.769472  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:42.783473  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:42.784385  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:42.821487  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:42.910866  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:43.269586  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:43.283137  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:43.284561  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:43.322062  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:43.770396  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:43.783706  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:43.785318  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:43.874138  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:44.270492  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:44.288382  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:44.289311  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:44.323291  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:44.772398  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:44.784708  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:44.789269  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:44.828934  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:45.270427  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:45.293926  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:45.297006  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:45.330624  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:45.395524  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:45.770375  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:45.780214  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:45.782897  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:45.821691  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:46.269691  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:46.287920  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:46.290547  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:46.321363  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:46.769449  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:46.780741  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:46.781790  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:46.821438  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:47.268967  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:47.283856  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:47.288484  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:47.321138  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:47.771747  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:47.782286  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:47.782867  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:47.821598  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:47.901855  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:48.270415  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:48.283534  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:48.293559  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:48.321254  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:48.769759  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:48.783475  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:48.784034  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:48.821245  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:49.269519  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:49.296283  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:49.297159  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:49.321855  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:49.769549  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:49.786083  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:49.787661  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:49.836279  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:49.905574  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:50.269880  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:50.284332  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:50.285274  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:50.326269  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:50.770583  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:50.784496  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:50.786832  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:50.821866  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:51.272774  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:51.288246  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:51.300589  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:51.328745  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:51.769682  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:51.784224  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:51.786399  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:51.822610  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:52.270010  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:52.284491  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:52.296634  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:52.321591  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:52.395168  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:52.769363  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:52.803871  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:52.804636  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:52.851054  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:53.269200  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:53.291143  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:53.292306  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:53.320768  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:53.769623  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:53.780255  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:53.781495  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:53.821099  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:54.270051  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:54.280279  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:54.286682  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:54.321233  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:54.397046  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:54.769210  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:54.781271  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:54.781800  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:54.821639  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:55.269499  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:55.283112  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:55.288430  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:55.321614  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:55.770291  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:55.780427  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:55.783414  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:55.821748  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:56.269112  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:56.297598  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:56.299043  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:56.322662  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:56.769391  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:56.782271  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:56.785981  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:56.822587  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:56.895669  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:57.269104  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:57.283331  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:57.285039  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:57.324318  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:57.770711  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:57.785695  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:57.786564  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:57.821295  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:58.270847  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:58.297070  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:58.299954  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:58.324589  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:58.770818  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:58.783118  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:58.784755  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:58.824340  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:58.898454  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:27:59.269608  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:59.288962  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:59.289893  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:59.327209  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:27:59.771301  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:27:59.779197  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:27:59.782085  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:27:59.821907  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:00.314086  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:28:00.315524  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:00.315879  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:00.377221  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:00.769699  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:00.782012  720494 kapi.go:107] duration metric: took 1m29.50615052s to wait for kubernetes.io/minikube-addons=registry ...
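The kapi.go:96 lines above show minikube polling each addon's pods by label selector, roughly twice per second, until they leave Pending; the kapi.go:107 line here records the total wait for the registry selector. A minimal sketch of that polling pattern using client-go follows; the kubeconfig path, poll interval, and timeout are illustrative assumptions, not values taken from this run.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector until all are Running,
	// printing one line per attempt much like kapi.go:96 does above.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// Transient list errors and empty results simply retry;
				// this is the state the log renders as "Pending: [<nil>]".
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	}

	func main() {
		// Kubeconfig path is an assumption for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		start := time.Now()
		if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=registry\n", time.Since(start))
	}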
	I0920 19:28:00.785641  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:00.821608  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:00.904903  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:01.273520  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:01.285242  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:01.322849  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:01.769313  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:01.783195  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:01.822914  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:02.274020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:02.298141  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:02.326441  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:02.780076  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:02.785058  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:02.822665  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:03.268604  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:03.283597  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:03.321554  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:03.395718  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:03.768851  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:03.781855  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:03.823928  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:04.272216  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:04.283815  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:04.321512  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:04.769441  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:04.781911  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:04.821714  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:05.273285  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:05.286463  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:05.321734  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:05.400983  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:05.768739  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:05.782803  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:05.822520  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:06.271932  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:06.284198  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:06.322312  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:06.769439  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:06.781469  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:06.821798  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:07.269345  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:07.282286  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:07.321981  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:07.768935  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:07.782910  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:07.822376  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:07.899845  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:08.270993  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:28:08.281747  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:08.373020  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:08.769168  720494 kapi.go:107] duration metric: took 1m33.003789569s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 19:28:08.771038  720494 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-244316 cluster.
	I0920 19:28:08.772384  720494 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 19:28:08.773719  720494 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
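The three out.go:177 notes above describe the gcp-auth opt-out: pods carrying a label with the gcp-auth-skip-secret key are left unmodified by the webhook. A minimal sketch of adding that label via a client-go strategic-merge patch; the pod name, namespace, kubeconfig path, and label value are illustrative assumptions (the message above says only the key matters).

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is an assumption for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Add the opt-out label; per the message above the webhook keys
		// off the label's presence, so the value "true" is arbitrary.
		patch := []byte(`{"metadata":{"labels":{"gcp-auth-skip-secret":"true"}}}`)
		_, err = cs.CoreV1().Pods("default").Patch(context.TODO(),
			"my-pod", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("gcp-auth-skip-secret label applied")
	}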
	I0920 19:28:08.781583  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:08.821332  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:09.282252  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:09.322029  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:09.783523  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:09.822921  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:09.902756  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:10.296308  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:10.322822  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:10.781762  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:10.822736  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:11.297609  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:11.321398  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:11.788233  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:11.824483  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:12.282873  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:12.322536  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:12.397997  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:12.782445  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:12.821032  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:13.288861  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:13.329878  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:13.781557  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:13.821498  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:14.290567  720494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:28:14.397119  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:14.401354  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:14.782289  720494 kapi.go:107] duration metric: took 1m43.505284147s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 19:28:14.821777  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:15.321877  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:15.834652  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:16.322429  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:16.822634  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:16.895178  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:17.323711  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:17.821392  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:18.326695  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:18.826947  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:18.895775  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:19.322832  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:19.825859  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:20.326263  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:20.825646  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:20.902662  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:21.322196  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:21.822435  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:22.322167  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:22.824989  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:23.322738  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:23.399445  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:23.822550  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:24.322519  720494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:28:24.824176  720494 kapi.go:107] duration metric: took 1m53.007723649s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 19:28:24.825669  720494 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0920 19:28:24.826761  720494 addons.go:510] duration metric: took 2m0.094026687s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0920 19:28:25.896052  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:27.896324  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:30.395200  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:32.895750  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:34.896053  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:37.396563  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:39.396837  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:41.895058  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:43.896042  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:45.907685  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:48.395599  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:50.895101  720494 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"False"
	I0920 19:28:51.396083  720494 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace has status "Ready":"True"
	I0920 19:28:51.396121  720494 pod_ready.go:82] duration metric: took 1m39.007452648s for pod "metrics-server-84c5f94fbc-zn5jl" in "kube-system" namespace to be "Ready" ...
	I0920 19:28:51.396140  720494 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-n79hn" in "kube-system" namespace to be "Ready" ...
	I0920 19:28:51.402155  720494 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-n79hn" in "kube-system" namespace has status "Ready":"True"
	I0920 19:28:51.402182  720494 pod_ready.go:82] duration metric: took 6.032492ms for pod "nvidia-device-plugin-daemonset-n79hn" in "kube-system" namespace to be "Ready" ...
	I0920 19:28:51.402206  720494 pod_ready.go:39] duration metric: took 1m40.599394134s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
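The pod_ready.go:93/:103 lines report whether a pod's Ready condition is True. A minimal sketch of that check with client-go; the kubeconfig path is an assumption, while the pod name and namespace are taken from this run's log.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True, which is
	// what pod_ready.go renders as has status "Ready":"True".
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path is an assumption for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-84c5f94fbc-zn5jl", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q Ready=%v\n", pod.Name, isPodReady(pod))
	}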
	I0920 19:28:51.402223  720494 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:28:51.402271  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:28:51.402336  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:28:51.456299  720494 cri.go:89] found id: "7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:28:51.456320  720494 cri.go:89] found id: ""
	I0920 19:28:51.456328  720494 logs.go:276] 1 containers: [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb]
	I0920 19:28:51.456393  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.460648  720494 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:28:51.460789  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:28:51.505091  720494 cri.go:89] found id: "a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:28:51.505116  720494 cri.go:89] found id: ""
	I0920 19:28:51.505128  720494 logs.go:276] 1 containers: [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56]
	I0920 19:28:51.505189  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.509129  720494 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:28:51.509207  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:28:51.562231  720494 cri.go:89] found id: "057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:28:51.562252  720494 cri.go:89] found id: ""
	I0920 19:28:51.562260  720494 logs.go:276] 1 containers: [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78]
	I0920 19:28:51.562319  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.566016  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:28:51.566137  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:28:51.603264  720494 cri.go:89] found id: "4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:28:51.603287  720494 cri.go:89] found id: ""
	I0920 19:28:51.603295  720494 logs.go:276] 1 containers: [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232]
	I0920 19:28:51.603353  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.606913  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:28:51.606987  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:28:51.652913  720494 cri.go:89] found id: "f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:28:51.652935  720494 cri.go:89] found id: ""
	I0920 19:28:51.652943  720494 logs.go:276] 1 containers: [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba]
	I0920 19:28:51.653002  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.656955  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:28:51.657040  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:28:51.704412  720494 cri.go:89] found id: "be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:28:51.704438  720494 cri.go:89] found id: ""
	I0920 19:28:51.704447  720494 logs.go:276] 1 containers: [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8]
	I0920 19:28:51.704534  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.708634  720494 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:28:51.708744  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:28:51.752746  720494 cri.go:89] found id: "4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:28:51.752776  720494 cri.go:89] found id: ""
	I0920 19:28:51.752785  720494 logs.go:276] 1 containers: [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1]
	I0920 19:28:51.752879  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:28:51.758970  720494 logs.go:123] Gathering logs for kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] ...
	I0920 19:28:51.759003  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:28:51.819975  720494 logs.go:123] Gathering logs for etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] ...
	I0920 19:28:51.820014  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:28:51.876012  720494 logs.go:123] Gathering logs for kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] ...
	I0920 19:28:51.876043  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:28:51.921789  720494 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:28:51.921823  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:28:52.030887  720494 logs.go:123] Gathering logs for kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] ...
	I0920 19:28:52.030941  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:28:52.115160  720494 logs.go:123] Gathering logs for kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] ...
	I0920 19:28:52.115293  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:28:52.178170  720494 logs.go:123] Gathering logs for container status ...
	I0920 19:28:52.178239  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:28:52.241811  720494 logs.go:123] Gathering logs for kubelet ...
	I0920 19:28:52.241847  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:28:52.266767  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.267022  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:28:52.267252  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.267486  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
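The logs.go:138 warnings above come from scanning the kubelet journal for known problem patterns. A rough sketch of that scan, reusing the same journalctl invocation shown in the ssh_runner line above; the match strings are an illustrative subset, not minikube's full pattern list.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Pull the last 400 kubelet journal lines, as the log above does.
		out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
		if err != nil {
			panic(err)
		}
		// Flag lines matching a couple of known-bad substrings; the RBAC
		// failures above would match "forbidden".
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "Failed to watch") || strings.Contains(line, "forbidden") {
				fmt.Println("Found kubelet problem:", line)
			}
		}
	}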
	I0920 19:28:52.327738  720494 logs.go:123] Gathering logs for dmesg ...
	I0920 19:28:52.327779  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:28:52.346639  720494 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:28:52.346670  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:28:52.535292  720494 logs.go:123] Gathering logs for coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] ...
	I0920 19:28:52.535323  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:28:52.598442  720494 logs.go:123] Gathering logs for kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] ...
	I0920 19:28:52.598473  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:28:52.654339  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:28:52.654372  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:28:52.654455  720494 out.go:270] X Problems detected in kubelet:
	W0920 19:28:52.654469  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.654489  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:28:52.654500  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:28:52.654507  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:28:52.654512  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:28:52.654519  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
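This closes the first log-gathering pass: each component's logs were collected by resolving a container ID with crictl ps and tailing it with crictl logs, the two commands visible verbatim in the ssh_runner lines above. A minimal sketch of that sequence:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherLogs resolves the first container ID for a component and tails
	// its last 400 log lines, mirroring the commands in the log above.
	func gatherLogs(component string) (string, error) {
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return "", err
		}
		id := strings.TrimSpace(strings.Split(string(ids), "\n")[0])
		if id == "" {
			return "", fmt.Errorf("no %s container found", component)
		}
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(logs), err
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			out, err := gatherLogs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("=== %s ===\n%s\n", c, out)
		}
	}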
	I0920 19:29:02.655826  720494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:29:02.669891  720494 api_server.go:72] duration metric: took 2m37.936293093s to wait for apiserver process to appear ...
	I0920 19:29:02.669918  720494 api_server.go:88] waiting for apiserver healthz status ...
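api_server.go:88 then polls the apiserver's /healthz endpoint until it answers. A rough sketch of such a probe; the address and the skip-verify TLS setting are assumptions for illustration (minikube's docker driver commonly serves the apiserver at 192.168.49.2:8443, but this log does not show the port), and clusters that disable anonymous auth may answer 401/403 instead.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skipping certificate verification is for illustration only; a
		// real probe would trust the cluster CA instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // apiserver answers "ok"
					return
				}
			}
			time.Sleep(time.Second)
		}
	}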
	I0920 19:29:02.669953  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:29:02.670013  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:29:02.709792  720494 cri.go:89] found id: "7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:02.709821  720494 cri.go:89] found id: ""
	I0920 19:29:02.709830  720494 logs.go:276] 1 containers: [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb]
	I0920 19:29:02.709905  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.713936  720494 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:29:02.714022  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:29:02.758325  720494 cri.go:89] found id: "a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:02.758351  720494 cri.go:89] found id: ""
	I0920 19:29:02.758360  720494 logs.go:276] 1 containers: [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56]
	I0920 19:29:02.758421  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.762432  720494 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:29:02.762517  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:29:02.816194  720494 cri.go:89] found id: "057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:02.816229  720494 cri.go:89] found id: ""
	I0920 19:29:02.816254  720494 logs.go:276] 1 containers: [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78]
	I0920 19:29:02.816358  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.820412  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:29:02.820495  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:29:02.868008  720494 cri.go:89] found id: "4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:02.868057  720494 cri.go:89] found id: ""
	I0920 19:29:02.868066  720494 logs.go:276] 1 containers: [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232]
	I0920 19:29:02.868176  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.872662  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:29:02.872784  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:29:02.922423  720494 cri.go:89] found id: "f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:02.922448  720494 cri.go:89] found id: ""
	I0920 19:29:02.922457  720494 logs.go:276] 1 containers: [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba]
	I0920 19:29:02.922570  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.926673  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:29:02.926808  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:29:02.974679  720494 cri.go:89] found id: "be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:02.974703  720494 cri.go:89] found id: ""
	I0920 19:29:02.974712  720494 logs.go:276] 1 containers: [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8]
	I0920 19:29:02.974773  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:02.978454  720494 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:29:02.978565  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:29:03.024328  720494 cri.go:89] found id: "4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:03.024410  720494 cri.go:89] found id: ""
	I0920 19:29:03.024433  720494 logs.go:276] 1 containers: [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1]
	I0920 19:29:03.024509  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:03.028984  720494 logs.go:123] Gathering logs for kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] ...
	I0920 19:29:03.029059  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:03.078751  720494 logs.go:123] Gathering logs for kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] ...
	I0920 19:29:03.078784  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:03.123529  720494 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:29:03.123565  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:29:03.267729  720494 logs.go:123] Gathering logs for kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] ...
	I0920 19:29:03.267765  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:03.319964  720494 logs.go:123] Gathering logs for kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] ...
	I0920 19:29:03.319999  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:03.377209  720494 logs.go:123] Gathering logs for etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] ...
	I0920 19:29:03.377254  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:03.430429  720494 logs.go:123] Gathering logs for coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] ...
	I0920 19:29:03.430466  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:03.479287  720494 logs.go:123] Gathering logs for kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] ...
	I0920 19:29:03.479326  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:03.561312  720494 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:29:03.561350  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:29:03.668739  720494 logs.go:123] Gathering logs for container status ...
	I0920 19:29:03.668801  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:29:03.732250  720494 logs.go:123] Gathering logs for kubelet ...
	I0920 19:29:03.732283  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:29:03.763347  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.763596  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:03.763788  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.764019  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:03.824458  720494 logs.go:123] Gathering logs for dmesg ...
	I0920 19:29:03.824495  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:29:03.842781  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:03.842807  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:29:03.842859  720494 out.go:270] X Problems detected in kubelet:
	W0920 19:29:03.842874  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.842882  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:03.842891  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:03.842901  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:03.842906  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:03.842912  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
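
The four kubelet findings flagged above come from the node authorizer: a kubelet may list or watch a ConfigMap only once a pod that references it is bound to its node, so the local-path-storage reads were denied ("no relationship found between node 'addons-244316' and this object") until the provisioner pod was scheduled there. A hypothetical re-check of the grant via standard kubectl impersonation (sketch, not part of the test run):

  kubectl --context addons-244316 auth can-i list configmaps \
    --namespace local-path-storage \
    --as system:node:addons-244316 --as-group system:nodes
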
	I0920 19:29:13.844440  720494 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:29:13.852275  720494 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 19:29:13.853640  720494 api_server.go:141] control plane version: v1.31.1
	I0920 19:29:13.853669  720494 api_server.go:131] duration metric: took 11.183744147s to wait for apiserver health ...
	I0920 19:29:13.853678  720494 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:29:13.853701  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:29:13.853773  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:29:13.894321  720494 cri.go:89] found id: "7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:13.894346  720494 cri.go:89] found id: ""
	I0920 19:29:13.894354  720494 logs.go:276] 1 containers: [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb]
	I0920 19:29:13.894418  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:13.898250  720494 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:29:13.898360  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:29:13.941458  720494 cri.go:89] found id: "a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:13.941492  720494 cri.go:89] found id: ""
	I0920 19:29:13.941500  720494 logs.go:276] 1 containers: [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56]
	I0920 19:29:13.941573  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:13.945504  720494 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:29:13.945587  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:29:13.986871  720494 cri.go:89] found id: "057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:13.986894  720494 cri.go:89] found id: ""
	I0920 19:29:13.986902  720494 logs.go:276] 1 containers: [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78]
	I0920 19:29:13.986962  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:13.990974  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:29:13.991061  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:29:14.034050  720494 cri.go:89] found id: "4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:14.034071  720494 cri.go:89] found id: ""
	I0920 19:29:14.034078  720494 logs.go:276] 1 containers: [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232]
	I0920 19:29:14.034141  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.038040  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:29:14.038128  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:29:14.081852  720494 cri.go:89] found id: "f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:14.081874  720494 cri.go:89] found id: ""
	I0920 19:29:14.081883  720494 logs.go:276] 1 containers: [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba]
	I0920 19:29:14.081944  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.085846  720494 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:29:14.085928  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:29:14.133064  720494 cri.go:89] found id: "be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:14.133089  720494 cri.go:89] found id: ""
	I0920 19:29:14.133098  720494 logs.go:276] 1 containers: [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8]
	I0920 19:29:14.133162  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.136964  720494 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:29:14.137069  720494 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:29:14.177123  720494 cri.go:89] found id: "4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:14.177146  720494 cri.go:89] found id: ""
	I0920 19:29:14.177155  720494 logs.go:276] 1 containers: [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1]
	I0920 19:29:14.177213  720494 ssh_runner.go:195] Run: which crictl
	I0920 19:29:14.180998  720494 logs.go:123] Gathering logs for kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] ...
	I0920 19:29:14.181035  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8"
	I0920 19:29:14.260229  720494 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:29:14.260265  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:29:14.378494  720494 logs.go:123] Gathering logs for etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] ...
	I0920 19:29:14.378538  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56"
	I0920 19:29:14.437059  720494 logs.go:123] Gathering logs for coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] ...
	I0920 19:29:14.437092  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78"
	I0920 19:29:14.489260  720494 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:29:14.489292  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:29:14.630069  720494 logs.go:123] Gathering logs for kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] ...
	I0920 19:29:14.630100  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb"
	I0920 19:29:14.706585  720494 logs.go:123] Gathering logs for kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] ...
	I0920 19:29:14.706623  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232"
	I0920 19:29:14.762872  720494 logs.go:123] Gathering logs for kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] ...
	I0920 19:29:14.762908  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba"
	I0920 19:29:14.812852  720494 logs.go:123] Gathering logs for kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] ...
	I0920 19:29:14.812885  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1"
	I0920 19:29:14.865844  720494 logs.go:123] Gathering logs for container status ...
	I0920 19:29:14.865879  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:29:14.923028  720494 logs.go:123] Gathering logs for kubelet ...
	I0920 19:29:14.923065  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 19:29:14.957088  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:14.957339  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:14.957537  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:14.957775  720494 logs.go:138] Found kubelet problem: Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:15.020892  720494 logs.go:123] Gathering logs for dmesg ...
	I0920 19:29:15.020998  720494 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:29:15.055155  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:15.055266  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 19:29:15.055358  720494 out.go:270] X Problems detected in kubelet:
	W0920 19:29:15.055399  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728283    1514 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:15.055452  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728342    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	W0920 19:29:15.055502  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: W0920 19:27:10.728852    1514 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-244316" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-244316' and this object
	W0920 19:29:15.055543  720494 out.go:270]   Sep 20 19:27:10 addons-244316 kubelet[1514]: E0920 19:27:10.728886    1514 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-244316\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-244316' and this object" logger="UnhandledError"
	I0920 19:29:15.055594  720494 out.go:358] Setting ErrFile to fd 2...
	I0920 19:29:15.055620  720494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:29:25.080014  720494 system_pods.go:59] 18 kube-system pods found
	I0920 19:29:25.080090  720494 system_pods.go:61] "coredns-7c65d6cfc9-22l55" [f57f469f-0a10-4755-8ba7-7313badf3e97] Running
	I0920 19:29:25.080099  720494 system_pods.go:61] "csi-hostpath-attacher-0" [ede42a9c-57cd-4862-a473-bb89ae43f460] Running
	I0920 19:29:25.080104  720494 system_pods.go:61] "csi-hostpath-resizer-0" [e16bf395-29bf-4855-9bc2-e53e3fa612e9] Running
	I0920 19:29:25.080109  720494 system_pods.go:61] "csi-hostpathplugin-l9l66" [e3c46cb7-cf62-418b-8b71-c758942cced2] Running
	I0920 19:29:25.080113  720494 system_pods.go:61] "etcd-addons-244316" [c4f43849-20a5-4644-a084-aec2f01202e7] Running
	I0920 19:29:25.080249  720494 system_pods.go:61] "kindnet-62dj5" [0cef216d-8448-40df-9149-c124400377d6] Running
	I0920 19:29:25.080257  720494 system_pods.go:61] "kube-apiserver-addons-244316" [c65c8858-0a0f-424e-8135-ee436e4010d3] Running
	I0920 19:29:25.080267  720494 system_pods.go:61] "kube-controller-manager-addons-244316" [6abd01ee-fed9-4a26-8c01-19cd3b5e4d53] Running
	I0920 19:29:25.080281  720494 system_pods.go:61] "kube-ingress-dns-minikube" [d7af063e-bdd0-4bcb-916b-81ed6229b4e4] Running
	I0920 19:29:25.080286  720494 system_pods.go:61] "kube-proxy-2cdvm" [dc16595e-687e-4af7-a65b-bd9a28c49509] Running
	I0920 19:29:25.080327  720494 system_pods.go:61] "kube-scheduler-addons-244316" [f7b9623c-f0ee-4360-8f31-d3cd8cf88969] Running
	I0920 19:29:25.080346  720494 system_pods.go:61] "metrics-server-84c5f94fbc-zn5jl" [5ca001ce-a4b6-4954-bd42-f372e2f387fb] Running
	I0920 19:29:25.080381  720494 system_pods.go:61] "nvidia-device-plugin-daemonset-n79hn" [be19954c-2529-4f25-bd06-6dde36d7e9e8] Running
	I0920 19:29:25.080420  720494 system_pods.go:61] "registry-66c9cd494c-2gc7z" [c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97] Running
	I0920 19:29:25.080425  720494 system_pods.go:61] "registry-proxy-tbwxh" [6bb565a3-2192-4ce8-8582-11f1d9d8ec42] Running
	I0920 19:29:25.080430  720494 system_pods.go:61] "snapshot-controller-56fcc65765-7jw7t" [b10da70d-f5dd-46eb-993d-4973a5ac3e17] Running
	I0920 19:29:25.080456  720494 system_pods.go:61] "snapshot-controller-56fcc65765-xv9vm" [a58d3b2e-0d8e-4062-9b71-a472fa7e2fa8] Running
	I0920 19:29:25.080499  720494 system_pods.go:61] "storage-provisioner" [4ec9c5b1-c429-45cd-bc2c-9563f0f898d3] Running
	I0920 19:29:25.080507  720494 system_pods.go:74] duration metric: took 11.226821637s to wait for pod list to return data ...
	I0920 19:29:25.080520  720494 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:29:25.084074  720494 default_sa.go:45] found service account: "default"
	I0920 19:29:25.084118  720494 default_sa.go:55] duration metric: took 3.588373ms for default service account to be created ...
	I0920 19:29:25.084130  720494 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:29:25.098531  720494 system_pods.go:86] 18 kube-system pods found
	I0920 19:29:25.098691  720494 system_pods.go:89] "coredns-7c65d6cfc9-22l55" [f57f469f-0a10-4755-8ba7-7313badf3e97] Running
	I0920 19:29:25.098718  720494 system_pods.go:89] "csi-hostpath-attacher-0" [ede42a9c-57cd-4862-a473-bb89ae43f460] Running
	I0920 19:29:25.098741  720494 system_pods.go:89] "csi-hostpath-resizer-0" [e16bf395-29bf-4855-9bc2-e53e3fa612e9] Running
	I0920 19:29:25.098764  720494 system_pods.go:89] "csi-hostpathplugin-l9l66" [e3c46cb7-cf62-418b-8b71-c758942cced2] Running
	I0920 19:29:25.098787  720494 system_pods.go:89] "etcd-addons-244316" [c4f43849-20a5-4644-a084-aec2f01202e7] Running
	I0920 19:29:25.098799  720494 system_pods.go:89] "kindnet-62dj5" [0cef216d-8448-40df-9149-c124400377d6] Running
	I0920 19:29:25.098808  720494 system_pods.go:89] "kube-apiserver-addons-244316" [c65c8858-0a0f-424e-8135-ee436e4010d3] Running
	I0920 19:29:25.098814  720494 system_pods.go:89] "kube-controller-manager-addons-244316" [6abd01ee-fed9-4a26-8c01-19cd3b5e4d53] Running
	I0920 19:29:25.098820  720494 system_pods.go:89] "kube-ingress-dns-minikube" [d7af063e-bdd0-4bcb-916b-81ed6229b4e4] Running
	I0920 19:29:25.098824  720494 system_pods.go:89] "kube-proxy-2cdvm" [dc16595e-687e-4af7-a65b-bd9a28c49509] Running
	I0920 19:29:25.098829  720494 system_pods.go:89] "kube-scheduler-addons-244316" [f7b9623c-f0ee-4360-8f31-d3cd8cf88969] Running
	I0920 19:29:25.098833  720494 system_pods.go:89] "metrics-server-84c5f94fbc-zn5jl" [5ca001ce-a4b6-4954-bd42-f372e2f387fb] Running
	I0920 19:29:25.098839  720494 system_pods.go:89] "nvidia-device-plugin-daemonset-n79hn" [be19954c-2529-4f25-bd06-6dde36d7e9e8] Running
	I0920 19:29:25.098847  720494 system_pods.go:89] "registry-66c9cd494c-2gc7z" [c5629ec4-4a53-45e1-b6f9-a4b1f7c77d97] Running
	I0920 19:29:25.098851  720494 system_pods.go:89] "registry-proxy-tbwxh" [6bb565a3-2192-4ce8-8582-11f1d9d8ec42] Running
	I0920 19:29:25.098858  720494 system_pods.go:89] "snapshot-controller-56fcc65765-7jw7t" [b10da70d-f5dd-46eb-993d-4973a5ac3e17] Running
	I0920 19:29:25.098862  720494 system_pods.go:89] "snapshot-controller-56fcc65765-xv9vm" [a58d3b2e-0d8e-4062-9b71-a472fa7e2fa8] Running
	I0920 19:29:25.098869  720494 system_pods.go:89] "storage-provisioner" [4ec9c5b1-c429-45cd-bc2c-9563f0f898d3] Running
	I0920 19:29:25.098878  720494 system_pods.go:126] duration metric: took 14.740845ms to wait for k8s-apps to be running ...
	I0920 19:29:25.098891  720494 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:29:25.098960  720494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:29:25.113514  720494 system_svc.go:56] duration metric: took 14.611289ms WaitForService to wait for kubelet
	I0920 19:29:25.113546  720494 kubeadm.go:582] duration metric: took 3m0.379953199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:29:25.113573  720494 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:29:25.118070  720494 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:29:25.118139  720494 node_conditions.go:123] node cpu capacity is 2
	I0920 19:29:25.118151  720494 node_conditions.go:105] duration metric: took 4.571143ms to run NodePressure ...
	I0920 19:29:25.118164  720494 start.go:241] waiting for startup goroutines ...
	I0920 19:29:25.118172  720494 start.go:246] waiting for cluster config update ...
	I0920 19:29:25.118187  720494 start.go:255] writing updated cluster config ...
	I0920 19:29:25.118506  720494 ssh_runner.go:195] Run: rm -f paused
	I0920 19:29:25.476128  720494 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:29:25.479298  720494 out.go:177] * Done! kubectl is now configured to use "addons-244316" cluster and "default" namespace by default
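
The "Gathering logs for …" steps above map onto a handful of node-side commands. As a sketch, the same collection can be reproduced by hand over minikube ssh, reusing the exact crictl and journalctl invocations recorded in the runner output (the container ID is a placeholder):

  minikube -p addons-244316 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
  minikube -p addons-244316 ssh -- sudo /usr/bin/crictl logs --tail 400 <container-id>
  minikube -p addons-244316 ssh -- sudo journalctl -u crio -n 400
  minikube -p addons-244316 ssh -- sudo journalctl -u kubelet -n 400
  minikube -p addons-244316 ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"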
	
	
	==> CRI-O <==
	Sep 20 19:42:21 addons-244316 crio[966]: time="2024-09-20 19:42:21.314970555Z" level=info msg="Removed pod sandbox: ee54f491631422ff739c12db6ec644126f042ef37562ccfc78b96ca05e5db6cd" id=99da2a3d-4c1d-4f24-940c-67ba218cb267 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:42:28 addons-244316 crio[966]: time="2024-09-20 19:42:28.568682916Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2d8258de-cf97-4f41-8433-02dd1c2f4b62 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:42:28 addons-244316 crio[966]: time="2024-09-20 19:42:28.569169797Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2d8258de-cf97-4f41-8433-02dd1c2f4b62 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:42:39 addons-244316 crio[966]: time="2024-09-20 19:42:39.568992026Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=71f7ace4-fdca-42f3-999a-ee0fdfe881de name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:42:39 addons-244316 crio[966]: time="2024-09-20 19:42:39.569293320Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=71f7ace4-fdca-42f3-999a-ee0fdfe881de name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:42:54 addons-244316 crio[966]: time="2024-09-20 19:42:54.569242433Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f71fc2b5-6ac2-4fbd-9f96-0a9c70e141b8 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:42:54 addons-244316 crio[966]: time="2024-09-20 19:42:54.569491379Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f71fc2b5-6ac2-4fbd-9f96-0a9c70e141b8 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:43:07 addons-244316 crio[966]: time="2024-09-20 19:43:07.569135719Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e726114f-654a-4ef1-a0cd-545967e81131 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:43:07 addons-244316 crio[966]: time="2024-09-20 19:43:07.569407442Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e726114f-654a-4ef1-a0cd-545967e81131 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:43:21 addons-244316 crio[966]: time="2024-09-20 19:43:21.568843763Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b6c95320-67b9-4081-b974-22bbd179abd2 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:43:21 addons-244316 crio[966]: time="2024-09-20 19:43:21.569099643Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b6c95320-67b9-4081-b974-22bbd179abd2 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:43:36 addons-244316 crio[966]: time="2024-09-20 19:43:36.569248455Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8933836e-1a5a-453f-93b6-3303200cb3fe name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:43:36 addons-244316 crio[966]: time="2024-09-20 19:43:36.571416940Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8933836e-1a5a-453f-93b6-3303200cb3fe name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:43:50 addons-244316 crio[966]: time="2024-09-20 19:43:50.570590323Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b708256f-8ac3-4ef9-9d11-a3816a99a85c name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:43:50 addons-244316 crio[966]: time="2024-09-20 19:43:50.570902186Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b708256f-8ac3-4ef9-9d11-a3816a99a85c name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:44:03 addons-244316 crio[966]: time="2024-09-20 19:44:03.569375656Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ebb44a5c-8f01-4dbd-a02d-16fcecbc0cff name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:44:03 addons-244316 crio[966]: time="2024-09-20 19:44:03.569643655Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ebb44a5c-8f01-4dbd-a02d-16fcecbc0cff name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:44:07 addons-244316 crio[966]: time="2024-09-20 19:44:07.074573125Z" level=info msg="Stopping container: fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce (timeout: 30s)" id=9e8f8d47-e2e2-4684-8396-d636f476a49f name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:44:08 addons-244316 crio[966]: time="2024-09-20 19:44:08.331440958Z" level=info msg="Stopped container fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce: kube-system/metrics-server-84c5f94fbc-zn5jl/metrics-server" id=9e8f8d47-e2e2-4684-8396-d636f476a49f name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:44:08 addons-244316 crio[966]: time="2024-09-20 19:44:08.332451753Z" level=info msg="Stopping pod sandbox: 22cef3edf18b3d7fb91d6f327bf3505a2fc6fb90f37b14f18d60b931899eecd4" id=47083b55-ff41-49c8-9d28-872cff7cd6aa name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:44:08 addons-244316 crio[966]: time="2024-09-20 19:44:08.332767046Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-zn5jl Namespace:kube-system ID:22cef3edf18b3d7fb91d6f327bf3505a2fc6fb90f37b14f18d60b931899eecd4 UID:5ca001ce-a4b6-4954-bd42-f372e2f387fb NetNS:/var/run/netns/d95fb418-4f80-4146-a3d5-f98ff5336db8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 19:44:08 addons-244316 crio[966]: time="2024-09-20 19:44:08.332927395Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-zn5jl from CNI network \"kindnet\" (type=ptp)"
	Sep 20 19:44:08 addons-244316 crio[966]: time="2024-09-20 19:44:08.361305456Z" level=info msg="Stopped pod sandbox: 22cef3edf18b3d7fb91d6f327bf3505a2fc6fb90f37b14f18d60b931899eecd4" id=47083b55-ff41-49c8-9d28-872cff7cd6aa name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:44:08 addons-244316 crio[966]: time="2024-09-20 19:44:08.485403910Z" level=info msg="Removing container: fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce" id=e114aadf-06c5-4140-97ae-5c027bd4bca3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:44:08 addons-244316 crio[966]: time="2024-09-20 19:44:08.506393826Z" level=info msg="Removed container fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce: kube-system/metrics-server-84c5f94fbc-zn5jl/metrics-server" id=e114aadf-06c5-4140-97ae-5c027bd4bca3 name=/runtime.v1.RuntimeService/RemoveContainer
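
The repeating "Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" pairs above are CRI-O answering kubelet ImageStatus probes for an image that was never pulled, consistent with the default-namespace busybox pod retrying its image pull. A quick manual check on the node (standard crictl subcommands; sketch only):

  sudo crictl images | grep busybox
  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc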
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c0e6f6aec9e54       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6              2 minutes ago       Running             hello-world-app           0                   b79e0f239ddfc       hello-world-app-55bf9c44b4-mg7cv
	6e0b4a9414739       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                    5 minutes ago       Running             nginx                     0                   dc3e5360e6ba0       nginx
	d315e9086557b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69       16 minutes ago      Running             gcp-auth                  0                   c6f74f4e64606       gcp-auth-89d5ffd79-d2tpp
	37386e7680939       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98   16 minutes ago      Running             local-path-provisioner    0                   cc709ac8a6230       local-path-provisioner-86d989889c-fzcjl
	c524ed738a8d3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                   16 minutes ago      Running             storage-provisioner       0                   780e530cacd2f       storage-provisioner
	057fc4f7aad90       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                   16 minutes ago      Running             coredns                   0                   19fae96941a4c       coredns-7c65d6cfc9-22l55
	f693c5f3d507b       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                   17 minutes ago      Running             kube-proxy                0                   cc62ba102a745       kube-proxy-2cdvm
	4321d12c79ddf       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                   17 minutes ago      Running             kindnet-cni               0                   b90c147beb0ad       kindnet-62dj5
	4d724338eea34       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                   17 minutes ago      Running             kube-scheduler            0                   05db024319aa0       kube-scheduler-addons-244316
	be05ccc3ccb37       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                   17 minutes ago      Running             kube-controller-manager   0                   6b477bdf2c558       kube-controller-manager-addons-244316
	7df0e0b9e62ff       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                   17 minutes ago      Running             kube-apiserver            0                   ac32244a5406b       kube-apiserver-addons-244316
	a6f3359b2e88b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                   17 minutes ago      Running             etcd                      0                   1c0ae6d7145c8       etcd-addons-244316
	
	
	==> coredns [057fc4f7aad908f542fa61fcb193d0457d30d6afc8f8e5d9df9e759333865a78] <==
	[INFO] 10.244.0.15:59114 - 56514 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104129s
	[INFO] 10.244.0.15:50932 - 11460 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002666137s
	[INFO] 10.244.0.15:50932 - 39626 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002906394s
	[INFO] 10.244.0.15:33525 - 33323 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000546449s
	[INFO] 10.244.0.15:33525 - 45348 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000577833s
	[INFO] 10.244.0.15:59699 - 23607 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00012291s
	[INFO] 10.244.0.15:59699 - 54075 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179384s
	[INFO] 10.244.0.15:32831 - 28558 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072893s
	[INFO] 10.244.0.15:32831 - 18096 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000140543s
	[INFO] 10.244.0.15:45505 - 40088 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101889s
	[INFO] 10.244.0.15:45505 - 32415 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000152037s
	[INFO] 10.244.0.15:34547 - 57598 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001603466s
	[INFO] 10.244.0.15:34547 - 49347 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001676244s
	[INFO] 10.244.0.15:39827 - 28188 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076618s
	[INFO] 10.244.0.15:39827 - 45592 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000050157s
	[INFO] 10.244.0.20:47707 - 16074 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002686658s
	[INFO] 10.244.0.20:46427 - 23800 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00268235s
	[INFO] 10.244.0.20:57231 - 19877 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157937s
	[INFO] 10.244.0.20:45688 - 62216 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000097565s
	[INFO] 10.244.0.20:33274 - 51885 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125626s
	[INFO] 10.244.0.20:49302 - 18918 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092175s
	[INFO] 10.244.0.20:49895 - 24635 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00244512s
	[INFO] 10.244.0.20:44018 - 55406 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002061548s
	[INFO] 10.244.0.20:38373 - 29636 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000956351s
	[INFO] 10.244.0.20:33201 - 4012 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000749012s
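
The NXDOMAIN/NOERROR pattern above is ordinary cluster-DNS search-path expansion: with ndots:5, a name like registry.kube-system.svc.cluster.local is first tried with each search suffix appended (pod namespace, svc, cluster, then the node's us-east-2.compute.internal domain) before the bare name returns NOERROR. An illustrative pod resolv.conf that would produce exactly this query sequence (the nameserver IP is the conventional kube-dns default, an assumption rather than a value from this report):

  $ cat /etc/resolv.conf
  nameserver 10.96.0.10   # conventional kube-dns ClusterIP; assumed, not taken from this report
  search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
  options ndots:5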
	
	
	==> describe nodes <==
	Name:               addons-244316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-244316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-244316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_26_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-244316
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:26:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-244316
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:44:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:41:58 +0000   Fri, 20 Sep 2024 19:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:41:58 +0000   Fri, 20 Sep 2024 19:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:41:58 +0000   Fri, 20 Sep 2024 19:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:41:58 +0000   Fri, 20 Sep 2024 19:27:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-244316
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 545b19fe9bdc45b392d49f2b91832698
	  System UUID:                ef4c1a4b-0c08-44ed-8fa8-b5206cbb0701
	  Boot ID:                    7d682649-b07c-44b5-a0a6-3c50df538ea4
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-mg7cv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  gcp-auth                    gcp-auth-89d5ffd79-d2tpp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-22l55                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-addons-244316                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-62dj5                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-244316               250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-244316      200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-2cdvm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-244316               100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          local-path-provisioner-86d989889c-fzcjl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 17m   kube-proxy       
	  Normal   Starting                 17m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m   kubelet          Node addons-244316 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m   kubelet          Node addons-244316 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m   kubelet          Node addons-244316 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m   node-controller  Node addons-244316 event: Registered Node addons-244316 in Controller
	  Normal   NodeReady                16m   kubelet          Node addons-244316 status is now: NodeReady
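
The Allocated resources block above checks out against the pod table: only six kube-system pods request CPU, and the 42% figure is their sum over the node's 2-CPU (2000m) capacity:

  # 100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver
  # + 200m kube-controller-manager + 100m kube-scheduler = 850m
  echo $(( (100+100+100+250+200+100) * 100 / 2000 ))   # prints 42 (percent)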
	
	
	==> dmesg <==
	[Sep20 18:56] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep20 19:09] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep20 19:16] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [a6f3359b2e88be29f122ce6eb0f2840d01a010e329a55db76f271d9db7a02f56] <==
	{"level":"info","ts":"2024-09-20T19:26:15.496978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:26:15.497337Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.498258Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:26:15.499377Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:26:15.499863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:26:15.499980Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.500310Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.500373Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:26:15.505603Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:26:15.506814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T19:26:15.513007Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:26:15.513100Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:26:26.241888Z","caller":"traceutil/trace.go:171","msg":"trace[2026199184] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"100.101501ms","start":"2024-09-20T19:26:26.141768Z","end":"2024-09-20T19:26:26.241870Z","steps":["trace[2026199184] 'process raft request'  (duration: 39.701056ms)","trace[2026199184] 'compare'  (duration: 60.307606ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T19:26:27.471528Z","caller":"traceutil/trace.go:171","msg":"trace[1882640504] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"106.638225ms","start":"2024-09-20T19:26:27.364872Z","end":"2024-09-20T19:26:27.471511Z","steps":["trace[1882640504] 'process raft request'  (duration: 59.29789ms)","trace[1882640504] 'compare'  (duration: 46.971064ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T19:26:27.526046Z","caller":"traceutil/trace.go:171","msg":"trace[973402408] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"107.82845ms","start":"2024-09-20T19:26:27.418197Z","end":"2024-09-20T19:26:27.526026Z","steps":["trace[973402408] 'process raft request'  (duration: 53.066249ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:27.571075Z","caller":"traceutil/trace.go:171","msg":"trace[2112418004] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"122.751557ms","start":"2024-09-20T19:26:27.448305Z","end":"2024-09-20T19:26:27.571056Z","steps":["trace[2112418004] 'process raft request'  (duration: 119.526598ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:28.500251Z","caller":"traceutil/trace.go:171","msg":"trace[95206656] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"174.74035ms","start":"2024-09-20T19:26:28.325306Z","end":"2024-09-20T19:26:28.500046Z","steps":["trace[95206656] 'process raft request'  (duration: 120.086848ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:28.701213Z","caller":"traceutil/trace.go:171","msg":"trace[792115432] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"135.97743ms","start":"2024-09-20T19:26:28.565220Z","end":"2024-09-20T19:26:28.701197Z","steps":["trace[792115432] 'process raft request'  (duration: 135.866171ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:26:28.721747Z","caller":"traceutil/trace.go:171","msg":"trace[634577408] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"148.304093ms","start":"2024-09-20T19:26:28.573386Z","end":"2024-09-20T19:26:28.721690Z","steps":["trace[634577408] 'process raft request'  (duration: 147.53831ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:36:15.774535Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1499}
	{"level":"info","ts":"2024-09-20T19:36:15.811553Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1499,"took":"36.293092ms","hash":1427089083,"current-db-size-bytes":6217728,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3289088,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-20T19:36:15.811629Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1427089083,"revision":1499,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T19:41:15.781293Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1917}
	{"level":"info","ts":"2024-09-20T19:41:15.800026Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1917,"took":"18.222452ms","hash":2481227134,"current-db-size-bytes":6217728,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4194304,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-09-20T19:41:15.800387Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2481227134,"revision":1917,"compact-revision":1499}
	
	
	==> gcp-auth [d315e9086557bcb438ba82c9c8029a5fa6eb5ca36d005581c58a6149197ccc08] <==
	2024/09/20 19:29:25 Ready to write response ...
	2024/09/20 19:29:25 Ready to marshal response ...
	2024/09/20 19:29:25 Ready to write response ...
	2024/09/20 19:37:30 Ready to marshal response ...
	2024/09/20 19:37:30 Ready to write response ...
	2024/09/20 19:37:30 Ready to marshal response ...
	2024/09/20 19:37:30 Ready to write response ...
	2024/09/20 19:37:30 Ready to marshal response ...
	2024/09/20 19:37:30 Ready to write response ...
	2024/09/20 19:37:39 Ready to marshal response ...
	2024/09/20 19:37:39 Ready to write response ...
	2024/09/20 19:38:07 Ready to marshal response ...
	2024/09/20 19:38:07 Ready to write response ...
	2024/09/20 19:38:22 Ready to marshal response ...
	2024/09/20 19:38:22 Ready to write response ...
	2024/09/20 19:38:54 Ready to marshal response ...
	2024/09/20 19:38:54 Ready to write response ...
	2024/09/20 19:41:15 Ready to marshal response ...
	2024/09/20 19:41:15 Ready to write response ...
	2024/09/20 19:41:28 Ready to marshal response ...
	2024/09/20 19:41:28 Ready to write response ...
	2024/09/20 19:41:28 Ready to marshal response ...
	2024/09/20 19:41:28 Ready to write response ...
	2024/09/20 19:41:36 Ready to marshal response ...
	2024/09/20 19:41:36 Ready to write response ...
	
	
	==> kernel <==
	 19:44:08 up  3:26,  0 users,  load average: 0.33, 0.45, 1.35
	Linux addons-244316 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4321d12c79ddfb47852742949f467a55f8de9a7d95a77e53d083b464f366e8b1] <==
	I0920 19:42:00.049975       1 main.go:299] handling current node
	I0920 19:42:10.057241       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:42:10.057287       1 main.go:299] handling current node
	I0920 19:42:20.053484       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:42:20.053642       1 main.go:299] handling current node
	I0920 19:42:30.051017       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:42:30.051073       1 main.go:299] handling current node
	I0920 19:42:40.049748       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:42:40.049887       1 main.go:299] handling current node
	I0920 19:42:50.057722       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:42:50.057781       1 main.go:299] handling current node
	I0920 19:43:00.052917       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:43:00.053080       1 main.go:299] handling current node
	I0920 19:43:10.049749       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:43:10.049786       1 main.go:299] handling current node
	I0920 19:43:20.057166       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:43:20.057213       1 main.go:299] handling current node
	I0920 19:43:30.050574       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:43:30.050624       1 main.go:299] handling current node
	I0920 19:43:40.049776       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:43:40.049924       1 main.go:299] handling current node
	I0920 19:43:50.057020       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:43:50.057168       1 main.go:299] handling current node
	I0920 19:44:00.055310       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:44:00.055367       1 main.go:299] handling current node
	
	
	==> kube-apiserver [7df0e0b9e62ff4475603b112ee628a4012e4568a8a571d8cc2c36005905f16eb] <==
	 > logger="UnhandledError"
	E0920 19:28:51.046775       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.3.33:443: connect: connection refused" logger="UnhandledError"
	E0920 19:28:51.048849       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.3.33:443: connect: connection refused" logger="UnhandledError"
	E0920 19:28:51.053974       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.3.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.3.33:443: connect: connection refused" logger="UnhandledError"
	I0920 19:28:51.140284       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 19:37:30.422760       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.183.123"}
	I0920 19:38:18.972262       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 19:38:38.722956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.723050       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.790058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.790113       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.821684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.822017       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.825093       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.825209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:38:38.857263       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:38:38.857387       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 19:38:39.823647       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 19:38:39.858807       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 19:38:39.873190       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 19:38:49.060313       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 19:38:50.091711       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 19:38:54.713979       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 19:38:55.034820       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.138.158"}
	I0920 19:41:15.565906       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.94.217"}
	
	
	==> kube-controller-manager [be05ccc3ccb371aa450d99f2c8126306768c051793c148da852c2a6a78b4b1b8] <==
	W0920 19:42:22.450023       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:42:22.450063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:42:22.968295       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:42:22.968342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:42:32.273197       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:42:32.273241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:42:38.164137       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:42:38.164183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:42:53.161094       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:42:53.161137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:43:14.994575       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:43:14.994628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:43:19.474758       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:43:19.474807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:43:23.578819       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:43:23.578866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:43:45.709918       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:43:45.709972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:43:55.433083       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:43:55.433145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:44:07.043624       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:44:07.043804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 19:44:07.054444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="4.603µs"
	W0920 19:44:07.287947       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:44:07.288081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f693c5f3d507b1874cc82923af3463add62f354b3908288cd03db55a64a09bba] <==
	I0920 19:26:30.694959       1 server_linux.go:66] "Using iptables proxy"
	I0920 19:26:30.941347       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 19:26:30.941501       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:26:31.113765       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 19:26:31.114389       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:26:31.247312       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:26:31.248577       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:26:31.248685       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:26:31.285254       1 config.go:199] "Starting service config controller"
	I0920 19:26:31.285292       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:26:31.285317       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:26:31.285321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:26:31.285701       1 config.go:328] "Starting node config controller"
	I0920 19:26:31.285721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:26:31.386427       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:26:31.388471       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:26:31.386092       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d724338eea34eac2f06f8c5c2953f37748902d61c89a1a85be0738231dec232] <==
	W0920 19:26:17.922834       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 19:26:17.922851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.922893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:26:17.922937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.922953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0920 19:26:17.923002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:26:17.923021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 19:26:17.923044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.922914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:26:17.923121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:17.923082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:26:17.923221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.745705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:26:18.745830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.753046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:26:18.753087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.822086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:26:18.822126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:18.825544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:26:18.825670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:19.036881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:26:19.036999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:26:19.047421       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:26:19.047533       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 19:26:21.817013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:43:20 addons-244316 kubelet[1514]: E0920 19:43:20.943173    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861400942849523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:43:20 addons-244316 kubelet[1514]: E0920 19:43:20.943224    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861400942849523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:43:21 addons-244316 kubelet[1514]: E0920 19:43:21.569386    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="26ebf772-d5b9-4d72-93d5-706cab403777"
	Sep 20 19:43:30 addons-244316 kubelet[1514]: E0920 19:43:30.945741    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861410945489684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:43:30 addons-244316 kubelet[1514]: E0920 19:43:30.945779    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861410945489684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:43:36 addons-244316 kubelet[1514]: E0920 19:43:36.573058    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="26ebf772-d5b9-4d72-93d5-706cab403777"
	Sep 20 19:43:40 addons-244316 kubelet[1514]: E0920 19:43:40.948922    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861420948614269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:43:40 addons-244316 kubelet[1514]: E0920 19:43:40.948961    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861420948614269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:43:50 addons-244316 kubelet[1514]: E0920 19:43:50.571432    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="26ebf772-d5b9-4d72-93d5-706cab403777"
	Sep 20 19:43:50 addons-244316 kubelet[1514]: E0920 19:43:50.952015    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861430951755592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:43:50 addons-244316 kubelet[1514]: E0920 19:43:50.952051    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861430951755592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:44:00 addons-244316 kubelet[1514]: E0920 19:44:00.954667    1514 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861440954391518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:44:00 addons-244316 kubelet[1514]: E0920 19:44:00.954729    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726861440954391518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572291,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:44:03 addons-244316 kubelet[1514]: E0920 19:44:03.570060    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="26ebf772-d5b9-4d72-93d5-706cab403777"
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.402943    1514 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ca001ce-a4b6-4954-bd42-f372e2f387fb-tmp-dir\") pod \"5ca001ce-a4b6-4954-bd42-f372e2f387fb\" (UID: \"5ca001ce-a4b6-4954-bd42-f372e2f387fb\") "
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.403010    1514 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjjqh\" (UniqueName: \"kubernetes.io/projected/5ca001ce-a4b6-4954-bd42-f372e2f387fb-kube-api-access-cjjqh\") pod \"5ca001ce-a4b6-4954-bd42-f372e2f387fb\" (UID: \"5ca001ce-a4b6-4954-bd42-f372e2f387fb\") "
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.403629    1514 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca001ce-a4b6-4954-bd42-f372e2f387fb-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "5ca001ce-a4b6-4954-bd42-f372e2f387fb" (UID: "5ca001ce-a4b6-4954-bd42-f372e2f387fb"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.414130    1514 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca001ce-a4b6-4954-bd42-f372e2f387fb-kube-api-access-cjjqh" (OuterVolumeSpecName: "kube-api-access-cjjqh") pod "5ca001ce-a4b6-4954-bd42-f372e2f387fb" (UID: "5ca001ce-a4b6-4954-bd42-f372e2f387fb"). InnerVolumeSpecName "kube-api-access-cjjqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.481204    1514 scope.go:117] "RemoveContainer" containerID="fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce"
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.506542    1514 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ca001ce-a4b6-4954-bd42-f372e2f387fb-tmp-dir\") on node \"addons-244316\" DevicePath \"\""
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.508036    1514 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cjjqh\" (UniqueName: \"kubernetes.io/projected/5ca001ce-a4b6-4954-bd42-f372e2f387fb-kube-api-access-cjjqh\") on node \"addons-244316\" DevicePath \"\""
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.507985    1514 scope.go:117] "RemoveContainer" containerID="fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce"
	Sep 20 19:44:08 addons-244316 kubelet[1514]: E0920 19:44:08.508866    1514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce\": container with ID starting with fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce not found: ID does not exist" containerID="fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce"
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.508911    1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce"} err="failed to get container status \"fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce\": rpc error: code = NotFound desc = could not find container \"fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce\": container with ID starting with fe7b8006fe5dadad1bd65c846d3071027994b1a9573c2a725d337f705e5ac9ce not found: ID does not exist"
	Sep 20 19:44:08 addons-244316 kubelet[1514]: I0920 19:44:08.570046    1514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca001ce-a4b6-4954-bd42-f372e2f387fb" path="/var/lib/kubelet/pods/5ca001ce-a4b6-4954-bd42-f372e2f387fb/volumes"
	
	
	==> storage-provisioner [c524ed738a8d38b9f6bd037c1dc8d7fef60bc2f2cd8fb0f684e4eb386bf75f67] <==
	I0920 19:27:11.555337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:27:11.604952       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:27:11.605114       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:27:11.637195       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:27:11.637419       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-244316_90ec2255-c73c-4224-95cd-667ebf7eeaa4!
	I0920 19:27:11.637476       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"064fe9f7-ba2a-47d4-ac4c-01438c7426a0", APIVersion:"v1", ResourceVersion:"888", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-244316_90ec2255-c73c-4224-95cd-667ebf7eeaa4 became leader
	I0920 19:27:11.737871       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-244316_90ec2255-c73c-4224-95cd-667ebf7eeaa4!
	

                                                
                                                
-- /stdout --
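Each "==> component <==" header in the dump above carries the CRI-O container ID whose log follows. A minimal sketch for re-fetching a single component's log on the node, assuming the container still exists (ID prefix taken from the kube-apiserver header above):

	# tail one component's log by container ID prefix via crictl on the node
	out/minikube-linux-arm64 -p addons-244316 ssh -- sudo crictl logs --tail 20 7df0e0b9e62f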
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-244316 -n addons-244316
helpers_test.go:261: (dbg) Run:  kubectl --context addons-244316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-244316 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-244316 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-244316/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 19:29:25 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x65mx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x65mx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-244316
	  Normal   Pulling    13m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m38s (x42 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (331.10s)
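The events above trace the lingering busybox pod to an image-pull auth failure against gcr.io, not to metrics-server itself. A minimal sketch for checking this by hand, assuming the addons-244316 profile is still up:

	# re-query the pod's events, then retry the failing pull directly against CRI-O
	kubectl --context addons-244316 get events -n default --field-selector involvedObject.name=busybox
	out/minikube-linux-arm64 -p addons-244316 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc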

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (137.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-688277 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 19:57:30.508793  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:57:58.218619  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:59:26.006914  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-688277 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m12.063746395s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-688277       NotReady   control-plane   10m     v1.31.1
	ha-688277-m02   Ready      control-plane   10m     v1.31.1
	ha-688277-m04   Ready      <none>          7m46s   v1.31.1

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
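Both checks agree: after the restart, ha-688277 itself reports a Ready status of Unknown, which usually means its kubelet stopped posting node status. A minimal sketch for pulling the Ready condition's reason and message, assuming the ha-688277 context is current as in the checks above:

	# show why the Ready condition is Unknown on the restarted control-plane node
	kubectl get node ha-688277 -o jsonpath='{.status.conditions[?(@.type=="Ready")].reason}{": "}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'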
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-688277
helpers_test.go:235: (dbg) docker inspect ha-688277:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5961ba43cb33c469ae1bae1ac1a7cd8f88f8553016ef942103ee6b9be5b14c7c",
	        "Created": "2024-09-20T19:48:34.360280555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 780838,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T19:57:28.096250529Z",
	            "FinishedAt": "2024-09-20T19:57:27.133824834Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/5961ba43cb33c469ae1bae1ac1a7cd8f88f8553016ef942103ee6b9be5b14c7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5961ba43cb33c469ae1bae1ac1a7cd8f88f8553016ef942103ee6b9be5b14c7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5961ba43cb33c469ae1bae1ac1a7cd8f88f8553016ef942103ee6b9be5b14c7c/hosts",
	        "LogPath": "/var/lib/docker/containers/5961ba43cb33c469ae1bae1ac1a7cd8f88f8553016ef942103ee6b9be5b14c7c/5961ba43cb33c469ae1bae1ac1a7cd8f88f8553016ef942103ee6b9be5b14c7c-json.log",
	        "Name": "/ha-688277",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-688277:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-688277",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/76093cdb85b5730d0facd9577a7e9204351662d86f84384dd49e8123f0c33d4d-init/diff:/var/lib/docker/overlay2/abb52e4f5a7bf897f28cf92e83fcbaaa3eeab65622f14fe44da11027a9deb44f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/76093cdb85b5730d0facd9577a7e9204351662d86f84384dd49e8123f0c33d4d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/76093cdb85b5730d0facd9577a7e9204351662d86f84384dd49e8123f0c33d4d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/76093cdb85b5730d0facd9577a7e9204351662d86f84384dd49e8123f0c33d4d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-688277",
	                "Source": "/var/lib/docker/volumes/ha-688277/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-688277",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-688277",
	                "name.minikube.sigs.k8s.io": "ha-688277",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6be29a8f9db818974e2f0c2ac8b4b739e1b767de3a035dc981a0471def3a95c6",
	            "SandboxKey": "/var/run/docker/netns/6be29a8f9db8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-688277": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e800fa7f1d9b3d3af9a43fc82aec0b6adf10d4a6e089ef6cb5fa313bc828859b",
	                    "EndpointID": "a5a6b5940fcf892dfff5bcbf887c5d812b3782d97378cfc2fcc77f9a94e39990",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-688277",
	                        "5961ba43cb33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
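The full docker inspect dump above is verbose; the fields this post-mortem leans on are the container state and the host port bound to the API server. A minimal sketch for extracting just those with Go templates, against the same container:

	# container state, restart count, and the host port mapped to 8443/tcp
	docker inspect -f '{{.State.Status}} (restarts: {{.RestartCount}})' ha-688277
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-688277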
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-688277 -n ha-688277
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-688277 logs -n 25: (2.238239741s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-688277 cp ha-688277-m03:/home/docker/cp-test.txt                              | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m04:/home/docker/cp-test_ha-688277-m03_ha-688277-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n                                                                 | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n ha-688277-m04 sudo cat                                          | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | /home/docker/cp-test_ha-688277-m03_ha-688277-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-688277 cp testdata/cp-test.txt                                                | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n                                                                 | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-688277 cp ha-688277-m04:/home/docker/cp-test.txt                              | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2586449424/001/cp-test_ha-688277-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n                                                                 | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-688277 cp ha-688277-m04:/home/docker/cp-test.txt                              | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277:/home/docker/cp-test_ha-688277-m04_ha-688277.txt                       |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n                                                                 | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n ha-688277 sudo cat                                              | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | /home/docker/cp-test_ha-688277-m04_ha-688277.txt                                 |           |         |         |                     |                     |
	| cp      | ha-688277 cp ha-688277-m04:/home/docker/cp-test.txt                              | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m02:/home/docker/cp-test_ha-688277-m04_ha-688277-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n                                                                 | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n ha-688277-m02 sudo cat                                          | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | /home/docker/cp-test_ha-688277-m04_ha-688277-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-688277 cp ha-688277-m04:/home/docker/cp-test.txt                              | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m03:/home/docker/cp-test_ha-688277-m04_ha-688277-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n                                                                 | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | ha-688277-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-688277 ssh -n ha-688277-m03 sudo cat                                          | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | /home/docker/cp-test_ha-688277-m04_ha-688277-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-688277 node stop m02 -v=7                                                     | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-688277 node start m02 -v=7                                                    | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:52 UTC | 20 Sep 24 19:53 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-688277 -v=7                                                           | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-688277 -v=7                                                                | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:53 UTC | 20 Sep 24 19:53 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-688277 --wait=true -v=7                                                    | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:53 UTC | 20 Sep 24 19:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-688277                                                                | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:56 UTC |                     |
	| node    | ha-688277 node delete m03 -v=7                                                   | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:56 UTC | 20 Sep 24 19:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-688277 stop -v=7                                                              | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:56 UTC | 20 Sep 24 19:57 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-688277 --wait=true                                                         | ha-688277 | jenkins | v1.34.0 | 20 Sep 24 19:57 UTC | 20 Sep 24 19:59 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:57:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:57:27.608308  780633 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:57:27.608548  780633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:57:27.608561  780633 out.go:358] Setting ErrFile to fd 2...
	I0920 19:57:27.608567  780633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:57:27.608829  780633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 19:57:27.609214  780633 out.go:352] Setting JSON to false
	I0920 19:57:27.610018  780633 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":13197,"bootTime":1726849051,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 19:57:27.610138  780633 start.go:139] virtualization:  
	I0920 19:57:27.613375  780633 out.go:177] * [ha-688277] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:57:27.616623  780633 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:57:27.616841  780633 notify.go:220] Checking for updates...
	I0920 19:57:27.621819  780633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:57:27.624371  780633 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:57:27.626905  780633 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	I0920 19:57:27.629350  780633 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:57:27.631906  780633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:57:27.635058  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:57:27.635573  780633 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:57:27.667051  780633 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:57:27.667171  780633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:57:27.722992  780633 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-20 19:57:27.713118036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:57:27.723109  780633 docker.go:318] overlay module found
	I0920 19:57:27.725975  780633 out.go:177] * Using the docker driver based on existing profile
	I0920 19:57:27.728627  780633 start.go:297] selected driver: docker
	I0920 19:57:27.728649  780633 start.go:901] validating driver "docker" against &{Name:ha-688277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-688277 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:57:27.728880  780633 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:57:27.728992  780633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:57:27.782862  780633 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-20 19:57:27.77292643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:57:27.783357  780633 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:57:27.783390  780633 cni.go:84] Creating CNI manager for ""
	I0920 19:57:27.783435  780633 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 19:57:27.783498  780633 start.go:340] cluster config:
	{Name:ha-688277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-688277 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:57:27.788135  780633 out.go:177] * Starting "ha-688277" primary control-plane node in "ha-688277" cluster
	I0920 19:57:27.790748  780633 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 19:57:27.793504  780633 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:57:27.796063  780633 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:57:27.796088  780633 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:57:27.796110  780633 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0920 19:57:27.796119  780633 cache.go:56] Caching tarball of preloaded images
	I0920 19:57:27.796209  780633 preload.go:172] Found /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0920 19:57:27.796219  780633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:57:27.796356  780633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/config.json ...
	W0920 19:57:27.814039  780633 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0920 19:57:27.814057  780633 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:57:27.814146  780633 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:57:27.814165  780633 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:57:27.814170  780633 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:57:27.814178  780633 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:57:27.814184  780633 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:57:27.815673  780633 image.go:273] response: 
	I0920 19:57:27.935984  780633 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:57:27.936038  780633 cache.go:194] Successfully downloaded all kic artifacts
	I0920 19:57:27.936087  780633 start.go:360] acquireMachinesLock for ha-688277: {Name:mkc5547b2d7f15480341549174d7fa3994c1f088 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:57:27.936190  780633 start.go:364] duration metric: took 70.481µs to acquireMachinesLock for "ha-688277"
	I0920 19:57:27.936215  780633 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:57:27.936227  780633 fix.go:54] fixHost starting: 
	I0920 19:57:27.936529  780633 cli_runner.go:164] Run: docker container inspect ha-688277 --format={{.State.Status}}
	I0920 19:57:27.953815  780633 fix.go:112] recreateIfNeeded on ha-688277: state=Stopped err=<nil>
	W0920 19:57:27.953865  780633 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:57:27.957097  780633 out.go:177] * Restarting existing docker container for "ha-688277" ...
	I0920 19:57:27.959862  780633 cli_runner.go:164] Run: docker start ha-688277
	I0920 19:57:28.257009  780633 cli_runner.go:164] Run: docker container inspect ha-688277 --format={{.State.Status}}
	I0920 19:57:28.280740  780633 kic.go:430] container "ha-688277" state is running.
	I0920 19:57:28.281295  780633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277
	I0920 19:57:28.305027  780633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/config.json ...
	I0920 19:57:28.305282  780633 machine.go:93] provisionDockerMachine start ...
	I0920 19:57:28.305346  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:28.325392  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:57:28.325649  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0920 19:57:28.325660  780633 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:57:28.326366  780633 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48486->127.0.0.1:32828: read: connection reset by peer
	I0920 19:57:31.480155  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-688277
	
	I0920 19:57:31.480195  780633 ubuntu.go:169] provisioning hostname "ha-688277"
	I0920 19:57:31.480263  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:31.503985  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:57:31.504237  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0920 19:57:31.504254  780633 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-688277 && echo "ha-688277" | sudo tee /etc/hostname
	I0920 19:57:31.661507  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-688277
	
	I0920 19:57:31.661690  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:31.684817  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:57:31.685198  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0920 19:57:31.685224  780633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-688277' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-688277/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-688277' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:57:31.833048  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
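	The script above is minikube's idempotent hostname pin: Debian-family images alias the machine's own hostname to 127.0.1.1 rather than 127.0.0.1. A quick way to confirm the alias took effect inside the node (a sketch, not part of the log):
	
	# Sketch: confirm the 127.0.1.1 alias written by the script above.
	getent hosts ha-688277    # expected: 127.0.1.1       ha-688277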
	I0920 19:57:31.833139  780633 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-712952/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-712952/.minikube}
	I0920 19:57:31.833187  780633 ubuntu.go:177] setting up certificates
	I0920 19:57:31.833212  780633 provision.go:84] configureAuth start
	I0920 19:57:31.833313  780633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277
	I0920 19:57:31.851285  780633 provision.go:143] copyHostCerts
	I0920 19:57:31.851477  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem
	I0920 19:57:31.851550  780633 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem, removing ...
	I0920 19:57:31.851557  780633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem
	I0920 19:57:31.851635  780633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem (1082 bytes)
	I0920 19:57:31.851740  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem
	I0920 19:57:31.851762  780633 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem, removing ...
	I0920 19:57:31.851766  780633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem
	I0920 19:57:31.852106  780633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem (1123 bytes)
	I0920 19:57:31.852166  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem
	I0920 19:57:31.852194  780633 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem, removing ...
	I0920 19:57:31.852198  780633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem
	I0920 19:57:31.852224  780633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem (1675 bytes)
	I0920 19:57:31.852290  780633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem org=jenkins.ha-688277 san=[127.0.0.1 192.168.49.2 ha-688277 localhost minikube]
	I0920 19:57:32.003961  780633 provision.go:177] copyRemoteCerts
	I0920 19:57:32.004048  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:57:32.004105  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:32.024368  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277/id_rsa Username:docker}
	I0920 19:57:32.125778  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 19:57:32.125851  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 19:57:32.151982  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 19:57:32.152052  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0920 19:57:32.177490  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 19:57:32.177574  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:57:32.203377  780633 provision.go:87] duration metric: took 370.131983ms to configureAuth
	I0920 19:57:32.203402  780633 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:57:32.203643  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:57:32.203748  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:32.223312  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:57:32.223564  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0920 19:57:32.223587  780633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:57:32.729484  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:57:32.729558  780633 machine.go:96] duration metric: took 4.424254533s to provisionDockerMachine
	I0920 19:57:32.729583  780633 start.go:293] postStartSetup for "ha-688277" (driver="docker")
	I0920 19:57:32.729607  780633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:57:32.729732  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:57:32.729810  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:32.752283  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277/id_rsa Username:docker}
	I0920 19:57:32.858276  780633 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:57:32.861433  780633 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:57:32.861471  780633 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:57:32.861483  780633 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:57:32.861489  780633 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:57:32.861500  780633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/addons for local assets ...
	I0920 19:57:32.861567  780633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/files for local assets ...
	I0920 19:57:32.861659  780633 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem -> 7197342.pem in /etc/ssl/certs
	I0920 19:57:32.861670  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem -> /etc/ssl/certs/7197342.pem
	I0920 19:57:32.861771  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:57:32.870565  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem --> /etc/ssl/certs/7197342.pem (1708 bytes)
	I0920 19:57:32.895096  780633 start.go:296] duration metric: took 165.484247ms for postStartSetup
	I0920 19:57:32.895206  780633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:57:32.895276  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:32.915879  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277/id_rsa Username:docker}
	I0920 19:57:33.017086  780633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:57:33.022185  780633 fix.go:56] duration metric: took 5.085948739s for fixHost
	I0920 19:57:33.022215  780633 start.go:83] releasing machines lock for "ha-688277", held for 5.086011409s
	I0920 19:57:33.022304  780633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277
	I0920 19:57:33.041891  780633 ssh_runner.go:195] Run: cat /version.json
	I0920 19:57:33.041946  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:33.041954  780633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:57:33.042019  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:33.066555  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277/id_rsa Username:docker}
	I0920 19:57:33.071666  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277/id_rsa Username:docker}
	I0920 19:57:33.297342  780633 ssh_runner.go:195] Run: systemctl --version
	I0920 19:57:33.302020  780633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:57:33.447380  780633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:57:33.451776  780633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:57:33.460981  780633 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:57:33.461067  780633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:57:33.470708  780633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 19:57:33.470785  780633 start.go:495] detecting cgroup driver to use...
	I0920 19:57:33.470826  780633 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:57:33.470899  780633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:57:33.484069  780633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:57:33.496006  780633 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:57:33.496100  780633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:57:33.509601  780633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:57:33.521922  780633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:57:33.612307  780633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:57:33.694039  780633 docker.go:233] disabling docker service ...
	I0920 19:57:33.694152  780633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:57:33.706761  780633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:57:33.718488  780633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:57:33.802454  780633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:57:33.884404  780633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:57:33.896649  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:57:33.916388  780633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:57:33.916467  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:33.928211  780633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:57:33.928373  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:33.939915  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:33.950775  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:33.961630  780633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:57:33.971337  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:33.982485  780633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:33.992367  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:34.002493  780633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:57:34.019596  780633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:57:34.029091  780633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:57:34.120119  780633 ssh_runner.go:195] Run: sudo systemctl restart crio
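	The sed calls above rewrite CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf before the restart: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. One way to spot-check the result (a sketch; the grep keys are taken directly from the commands above):
	
	# Sketch: verify the drop-in edits that preceded the crio restart.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf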
	I0920 19:57:34.241386  780633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:57:34.241491  780633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:57:34.245387  780633 start.go:563] Will wait 60s for crictl version
	I0920 19:57:34.245509  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:57:34.249207  780633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:57:34.291384  780633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 19:57:34.291507  780633 ssh_runner.go:195] Run: crio --version
	I0920 19:57:34.330530  780633 ssh_runner.go:195] Run: crio --version
	I0920 19:57:34.375628  780633 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 19:57:34.378525  780633 cli_runner.go:164] Run: docker network inspect ha-688277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:57:34.394691  780633 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 19:57:34.398345  780633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:57:34.409996  780633 kubeadm.go:883] updating cluster {Name:ha-688277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-688277 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:57:34.410148  780633 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:57:34.410209  780633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:57:34.458288  780633 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:57:34.458314  780633 crio.go:433] Images already preloaded, skipping extraction
	I0920 19:57:34.458380  780633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:57:34.500027  780633 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 19:57:34.500064  780633 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:57:34.500075  780633 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 19:57:34.500180  780633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-688277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-688277 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:57:34.500267  780633 ssh_runner.go:195] Run: crio config
	I0920 19:57:34.552593  780633 cni.go:84] Creating CNI manager for ""
	I0920 19:57:34.552621  780633 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 19:57:34.552632  780633 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:57:34.552655  780633 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-688277 NodeName:ha-688277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:57:34.552839  780633 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-688277"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
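	This rendered manifest is what later lands in /var/tmp/minikube/kubeadm.yaml.new (the 2147-byte scp below). If one wanted to sanity-check such a file by hand, recent kubeadm ships a validator (a sketch; assumes the `kubeadm config validate` subcommand available since roughly v1.26):
	
	# Sketch: validate the generated kubeadm config against its API schemas.
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new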
	
	I0920 19:57:34.552861  780633 kube-vip.go:115] generating kube-vip config ...
	I0920 19:57:34.552917  780633 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0920 19:57:34.566144  780633 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 19:57:34.566263  780633 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
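	Per the manifest above, kube-vip runs as a static pod on each control plane, elects a leader through the plndr-cp-lock Lease, and advertises the VIP 192.168.49.254 via ARP on eth0. Two quick checks once the cluster is up (a sketch, not part of the log):
	
	# Sketch: confirm the VIP answers and see which node holds the kube-vip lease.
	ping -c 1 192.168.49.254
	kubectl -n kube-system get lease plndr-cp-lock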
	I0920 19:57:34.566329  780633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:57:34.575029  780633 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:57:34.575151  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 19:57:34.584009  780633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0920 19:57:34.602856  780633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:57:34.621496  780633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0920 19:57:34.639633  780633 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 19:57:34.658652  780633 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0920 19:57:34.662231  780633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:57:34.673786  780633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:57:34.762661  780633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:57:34.776824  780633 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277 for IP: 192.168.49.2
	I0920 19:57:34.776850  780633 certs.go:194] generating shared ca certs ...
	I0920 19:57:34.776878  780633 certs.go:226] acquiring lock for ca certs: {Name:mk7d5a5d7b3ae5cfc59d92978e91627e15e3360b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:57:34.777082  780633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key
	I0920 19:57:34.777157  780633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key
	I0920 19:57:34.777171  780633 certs.go:256] generating profile certs ...
	I0920 19:57:34.777281  780633 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/client.key
	I0920 19:57:34.777331  780633 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key.c11ef79d
	I0920 19:57:34.777368  780633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt.c11ef79d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0920 19:57:35.095325  780633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt.c11ef79d ...
	I0920 19:57:35.095407  780633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt.c11ef79d: {Name:mk33de4fdc8682573ead8ad1d001c3858c88529b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:57:35.095670  780633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key.c11ef79d ...
	I0920 19:57:35.095719  780633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key.c11ef79d: {Name:mk95b2ffe296543c122514c39890effe5ec2f135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:57:35.095873  780633 certs.go:381] copying /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt.c11ef79d -> /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt
	I0920 19:57:35.096101  780633 certs.go:385] copying /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key.c11ef79d -> /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key
	I0920 19:57:35.096362  780633 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.key
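	The regenerated apiserver certificate above carries SANs for the service IP, localhost, both control-plane node IPs, and the HA VIP. They can be inspected directly (a sketch; the -ext flag assumes OpenSSL 1.1.1 or newer):
	
	# Sketch: list the SANs baked into the regenerated apiserver certificate.
	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt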
	I0920 19:57:35.096432  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 19:57:35.096493  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 19:57:35.096527  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 19:57:35.096563  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 19:57:35.096608  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 19:57:35.096669  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 19:57:35.096753  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 19:57:35.096797  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 19:57:35.096880  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734.pem (1338 bytes)
	W0920 19:57:35.096977  780633 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734_empty.pem, impossibly tiny 0 bytes
	I0920 19:57:35.097007  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 19:57:35.097070  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem (1082 bytes)
	I0920 19:57:35.097144  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:57:35.097204  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem (1675 bytes)
	I0920 19:57:35.097281  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem (1708 bytes)
	I0920 19:57:35.097336  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:57:35.097378  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734.pem -> /usr/share/ca-certificates/719734.pem
	I0920 19:57:35.097412  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem -> /usr/share/ca-certificates/7197342.pem
	I0920 19:57:35.098062  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:57:35.128707  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:57:35.155246  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:57:35.181943  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:57:35.210541  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:57:35.236716  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:57:35.262423  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:57:35.287705  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:57:35.314452  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:57:35.339678  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734.pem --> /usr/share/ca-certificates/719734.pem (1338 bytes)
	I0920 19:57:35.366486  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem --> /usr/share/ca-certificates/7197342.pem (1708 bytes)
	I0920 19:57:35.392156  780633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:57:35.411757  780633 ssh_runner.go:195] Run: openssl version
	I0920 19:57:35.418039  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:57:35.428920  780633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:57:35.433218  780633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:57:35.433298  780633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:57:35.440902  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:57:35.450636  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/719734.pem && ln -fs /usr/share/ca-certificates/719734.pem /etc/ssl/certs/719734.pem"
	I0920 19:57:35.460984  780633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/719734.pem
	I0920 19:57:35.464824  780633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 19:45 /usr/share/ca-certificates/719734.pem
	I0920 19:57:35.464908  780633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/719734.pem
	I0920 19:57:35.472403  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/719734.pem /etc/ssl/certs/51391683.0"
	I0920 19:57:35.481835  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7197342.pem && ln -fs /usr/share/ca-certificates/7197342.pem /etc/ssl/certs/7197342.pem"
	I0920 19:57:35.491842  780633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7197342.pem
	I0920 19:57:35.495783  780633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 19:45 /usr/share/ca-certificates/7197342.pem
	I0920 19:57:35.495908  780633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7197342.pem
	I0920 19:57:35.504103  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7197342.pem /etc/ssl/certs/3ec20f2e.0"
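The three `ln -fs` commands above implement OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the certificate's subject hash (b5213941 for minikubeCA here), and a symlink named `<hash>.0` in /etc/ssl/certs is what makes the CA discoverable to OpenSSL-based clients. A minimal local Go sketch of that step (minikube runs the equivalent shell over SSH; the paths are illustrative):

    // Local sketch of the hash-symlink step minikube performs over SSH:
    // `openssl x509 -hash -noout -in CERT` prints the subject hash that
    // OpenSSL-style trust stores use as the symlink name (<hash>.0).
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func trustCert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace a stale link, like `ln -fs`
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }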
	I0920 19:57:35.516165  780633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:57:35.520116  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:57:35.528205  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:57:35.536508  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:57:35.544923  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:57:35.553361  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:57:35.561378  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
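The `-checkend 86400` runs above ask whether each control-plane certificate remains valid for another 24 hours; a non-zero exit would trigger regeneration. A minimal sketch of the same check with Go's crypto/x509 (the path is one of the certs checked above):

    // Sketch of what `openssl x509 -checkend 86400` verifies:
    // does the certificate expire within the next 24 hours?
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon) // true would force cert regeneration
    }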
	I0920 19:57:35.569222  780633 kubeadm.go:392] StartCluster: {Name:ha-688277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-688277 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:57:35.569359  780633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 19:57:35.569430  780633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:57:35.618907  780633 cri.go:89] found id: ""
	I0920 19:57:35.618990  780633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:57:35.629765  780633 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 19:57:35.629790  780633 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 19:57:35.629874  780633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 19:57:35.640098  780633 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 19:57:35.640546  780633 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-688277" does not appear in /home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:57:35.640720  780633 kubeconfig.go:62] /home/jenkins/minikube-integration/19678-712952/kubeconfig needs updating (will repair): [kubeconfig missing "ha-688277" cluster setting kubeconfig missing "ha-688277" context setting]
	I0920 19:57:35.641065  780633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/kubeconfig: {Name:mk7d8753aacb2df257bd5191c7b120c25eed71dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:57:35.641547  780633 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:57:35.641978  780633 kapi.go:59] client config for ha-688277: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/client.key", CAFile:"/home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a16ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
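The kapi.go dump above is the rest.Config minikube assembled from the just-repaired kubeconfig: client cert/key and CA taken from the profile, host pointed at 192.168.49.2:8443. A hedged sketch of building an equivalent client with client-go (the kubeconfig path is taken from the log; the node listing at the end is only a connectivity probe, not something minikube does at this point):

    // Hedged sketch: build a typed client from the kubeconfig the log repairs.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// CertFile/KeyFile/CAFile in the dumped rest.Config come from this file.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19678-712952/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }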
	I0920 19:57:35.642720  780633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 19:57:35.642873  780633 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 19:57:35.654747  780633 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0920 19:57:35.654776  780633 kubeadm.go:597] duration metric: took 24.978654ms to restartPrimaryControlPlane
	I0920 19:57:35.654796  780633 kubeadm.go:394] duration metric: took 85.584495ms to StartCluster
	I0920 19:57:35.654862  780633 settings.go:142] acquiring lock: {Name:mk4ddd924228bcf0d3a34d801111d62307b61b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:57:35.655057  780633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:57:35.655883  780633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-712952/kubeconfig: {Name:mk7d8753aacb2df257bd5191c7b120c25eed71dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:57:35.656248  780633 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:57:35.656281  780633 start.go:241] waiting for startup goroutines ...
	I0920 19:57:35.656323  780633 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 19:57:35.657038  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:57:35.662327  780633 out.go:177] * Enabled addons: 
	I0920 19:57:35.665050  780633 addons.go:510] duration metric: took 8.729732ms for enable addons: enabled=[]
	I0920 19:57:35.665124  780633 start.go:246] waiting for cluster config update ...
	I0920 19:57:35.665152  780633 start.go:255] writing updated cluster config ...
	I0920 19:57:35.668592  780633 out.go:201] 
	I0920 19:57:35.671450  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:57:35.671640  780633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/config.json ...
	I0920 19:57:35.674773  780633 out.go:177] * Starting "ha-688277-m02" control-plane node in "ha-688277" cluster
	I0920 19:57:35.677435  780633 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 19:57:35.680106  780633 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:57:35.682873  780633 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:57:35.682921  780633 cache.go:56] Caching tarball of preloaded images
	I0920 19:57:35.682929  780633 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:57:35.683127  780633 preload.go:172] Found /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0920 19:57:35.683170  780633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:57:35.683338  780633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/config.json ...
	W0920 19:57:35.706188  780633 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0920 19:57:35.706209  780633 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:57:35.706302  780633 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:57:35.706325  780633 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:57:35.706335  780633 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:57:35.706354  780633 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:57:35.706365  780633 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:57:35.707606  780633 image.go:273] response: 
	I0920 19:57:35.829910  780633 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:57:35.829952  780633 cache.go:194] Successfully downloaded all kic artifacts
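The warning at 19:57:35.706188 explains why the tarball path is taken: the kicbase image present in the local Docker daemon fails an architecture check on this arm64 worker, so minikube falls back to its cached tarball. A hypothetical sketch of such a check via the docker CLI (the exact check minikube performs may differ):

    // Hypothetical sketch of the check behind "image ... is of wrong
    // architecture": compare the daemon image's Architecture to the host's
    // (docker CLI assumed on PATH).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"runtime"
    	"strings"
    )

    func imageMatchesHostArch(ref string) (bool, error) {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Architecture}}", ref).Output()
    	if err != nil {
    		return false, err // image not present in the daemon at all
    	}
    	arch := strings.TrimSpace(string(out))
    	return arch == runtime.GOARCH, nil // e.g. "arm64" on this worker
    }

    func main() {
    	ok, err := imageMatchesHostArch("gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662")
    	fmt.Println(ok, err) // on mismatch, fall back to the cached tarball
    }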
	I0920 19:57:35.829982  780633 start.go:360] acquireMachinesLock for ha-688277-m02: {Name:mk198d424cb52f076a963a4132de990b5a257bf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:57:35.830049  780633 start.go:364] duration metric: took 43.019µs to acquireMachinesLock for "ha-688277-m02"
	I0920 19:57:35.830075  780633 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:57:35.830085  780633 fix.go:54] fixHost starting: m02
	I0920 19:57:35.830376  780633 cli_runner.go:164] Run: docker container inspect ha-688277-m02 --format={{.State.Status}}
	I0920 19:57:35.846686  780633 fix.go:112] recreateIfNeeded on ha-688277-m02: state=Stopped err=<nil>
	W0920 19:57:35.846715  780633 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:57:35.849713  780633 out.go:177] * Restarting existing docker container for "ha-688277-m02" ...
	I0920 19:57:35.852335  780633 cli_runner.go:164] Run: docker start ha-688277-m02
	I0920 19:57:36.165499  780633 cli_runner.go:164] Run: docker container inspect ha-688277-m02 --format={{.State.Status}}
	I0920 19:57:36.187665  780633 kic.go:430] container "ha-688277-m02" state is running.
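fix.go found m02 stopped and restarted its container, then confirmed the state. A sketch of the same restart-and-verify loop through the docker CLI, which is what cli_runner shells out to (the timeout value is an assumption):

    // Sketch of the restart-and-verify step: `docker start`, then poll
    // .State.Status until the container reports "running".
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func restartAndWait(name string, timeout time.Duration) error {
    	if out, err := exec.Command("docker", "start", name).CombinedOutput(); err != nil {
    		return fmt.Errorf("docker start: %v: %s", err, out)
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("docker", "container", "inspect", name,
    			"--format", "{{.State.Status}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "running" {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("container %q not running after %s", name, timeout)
    }

    func main() {
    	fmt.Println(restartAndWait("ha-688277-m02", 30*time.Second))
    }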
	I0920 19:57:36.188067  780633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277-m02
	I0920 19:57:36.211066  780633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/config.json ...
	I0920 19:57:36.211333  780633 machine.go:93] provisionDockerMachine start ...
	I0920 19:57:36.211414  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m02
	I0920 19:57:36.233347  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:57:36.233582  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0920 19:57:36.233591  780633 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:57:36.234286  780633 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0920 19:57:39.404731  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-688277-m02
	
	I0920 19:57:39.404795  780633 ubuntu.go:169] provisioning hostname "ha-688277-m02"
	I0920 19:57:39.404884  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m02
	I0920 19:57:39.429956  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:57:39.430205  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0920 19:57:39.430217  780633 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-688277-m02 && echo "ha-688277-m02" | sudo tee /etc/hostname
	I0920 19:57:39.618598  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-688277-m02
	
	I0920 19:57:39.618766  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m02
	I0920 19:57:39.662650  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:57:39.662939  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0920 19:57:39.662967  780633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-688277-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-688277-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-688277-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:57:39.838815  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:57:39.838864  780633 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-712952/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-712952/.minikube}
	I0920 19:57:39.838886  780633 ubuntu.go:177] setting up certificates
	I0920 19:57:39.838898  780633 provision.go:84] configureAuth start
	I0920 19:57:39.838965  780633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277-m02
	I0920 19:57:39.876758  780633 provision.go:143] copyHostCerts
	I0920 19:57:39.876811  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem
	I0920 19:57:39.876851  780633 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem, removing ...
	I0920 19:57:39.876859  780633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem
	I0920 19:57:39.876992  780633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem (1082 bytes)
	I0920 19:57:39.877101  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem
	I0920 19:57:39.877155  780633 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem, removing ...
	I0920 19:57:39.877160  780633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem
	I0920 19:57:39.877247  780633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem (1123 bytes)
	I0920 19:57:39.877330  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem
	I0920 19:57:39.877350  780633 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem, removing ...
	I0920 19:57:39.877354  780633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem
	I0920 19:57:39.877424  780633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem (1675 bytes)
	I0920 19:57:39.877504  780633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem org=jenkins.ha-688277-m02 san=[127.0.0.1 192.168.49.3 ha-688277-m02 localhost minikube]
	I0920 19:57:40.025564  780633 provision.go:177] copyRemoteCerts
	I0920 19:57:40.025704  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:57:40.025778  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m02
	I0920 19:57:40.050293  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m02/id_rsa Username:docker}
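The "handshake failed: EOF" at 19:57:36.234286 is sshd inside the freshly started container not accepting connections yet; sshutil keeps dialing the forwarded port (127.0.0.1:32833 here) with the machine's id_rsa until it succeeds. A hedged sketch of that dial with golang.org/x/crypto/ssh (retry count and interval are assumptions):

    // Hedged sketch of the SSH dial sshutil performs: key-based auth against
    // the forwarded port, retrying while sshd is still starting up.
    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func dialWithRetry(addr, keyPath string, attempts int) (*ssh.Client, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container
    		Timeout:         5 * time.Second,
    	}
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		c, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return c, nil
    		}
    		lastErr = err // typically "ssh: handshake failed: EOF" until sshd is up
    		time.Sleep(time.Second)
    	}
    	return nil, fmt.Errorf("ssh dial %s: %w", addr, lastErr)
    }

    func main() {
    	c, err := dialWithRetry("127.0.0.1:32833",
    		"/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m02/id_rsa", 10)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer c.Close()
    	fmt.Println("connected")
    }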
	I0920 19:57:40.166839  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 19:57:40.166904  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 19:57:40.198768  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 19:57:40.198838  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:57:40.228238  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 19:57:40.228358  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 19:57:40.271477  780633 provision.go:87] duration metric: took 432.557752ms to configureAuth
	I0920 19:57:40.271568  780633 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:57:40.271901  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:57:40.272100  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m02
	I0920 19:57:40.298954  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:57:40.299401  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0920 19:57:40.299429  780633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:57:40.710103  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:57:40.710127  780633 machine.go:96] duration metric: took 4.498780328s to provisionDockerMachine
	I0920 19:57:40.710139  780633 start.go:293] postStartSetup for "ha-688277-m02" (driver="docker")
	I0920 19:57:40.710150  780633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:57:40.710214  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:57:40.710255  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m02
	I0920 19:57:40.727194  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m02/id_rsa Username:docker}
	I0920 19:57:40.830071  780633 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:57:40.833465  780633 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:57:40.833505  780633 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:57:40.833518  780633 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:57:40.833525  780633 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:57:40.833540  780633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/addons for local assets ...
	I0920 19:57:40.833601  780633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/files for local assets ...
	I0920 19:57:40.833681  780633 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem -> 7197342.pem in /etc/ssl/certs
	I0920 19:57:40.833693  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem -> /etc/ssl/certs/7197342.pem
	I0920 19:57:40.833799  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:57:40.842732  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem --> /etc/ssl/certs/7197342.pem (1708 bytes)
	I0920 19:57:40.870686  780633 start.go:296] duration metric: took 160.528996ms for postStartSetup
	I0920 19:57:40.870785  780633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:57:40.870840  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m02
	I0920 19:57:40.888727  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m02/id_rsa Username:docker}
	I0920 19:57:40.985872  780633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:57:40.990782  780633 fix.go:56] duration metric: took 5.160689701s for fixHost
	I0920 19:57:40.990809  780633 start.go:83] releasing machines lock for "ha-688277-m02", held for 5.160746242s
	I0920 19:57:40.990884  780633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277-m02
	I0920 19:57:41.015382  780633 out.go:177] * Found network options:
	I0920 19:57:41.018647  780633 out.go:177]   - NO_PROXY=192.168.49.2
	W0920 19:57:41.021438  780633 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 19:57:41.021487  780633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 19:57:41.021568  780633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:57:41.021633  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m02
	I0920 19:57:41.021926  780633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:57:41.021982  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m02
	I0920 19:57:41.046760  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m02/id_rsa Username:docker}
	I0920 19:57:41.054308  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m02/id_rsa Username:docker}
	I0920 19:57:41.303922  780633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:57:41.320058  780633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:57:41.330631  780633 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:57:41.330716  780633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:57:41.371978  780633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 19:57:41.372006  780633 start.go:495] detecting cgroup driver to use...
	I0920 19:57:41.372062  780633 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:57:41.372144  780633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:57:41.452386  780633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:57:41.521266  780633 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:57:41.521414  780633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:57:41.610117  780633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:57:41.733100  780633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:57:42.149835  780633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:57:42.558014  780633 docker.go:233] disabling docker service ...
	I0920 19:57:42.558092  780633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:57:42.635971  780633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:57:42.709274  780633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:57:43.058352  780633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:57:43.370229  780633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:57:43.434736  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:57:43.500631  780633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:57:43.500838  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:43.565365  780633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:57:43.565448  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:43.618233  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:43.672136  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:43.728059  780633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:57:43.755751  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:43.775521  780633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:57:43.833955  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
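Taken together, the sed edits above leave the CRI-O drop-in looking roughly like the excerpt below. The section headers are an assumption (the commands only rewrite or append individual keys in /etc/crio/crio.conf.d/02-crio.conf); the key/value pairs follow directly from the commands:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]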
	I0920 19:57:43.860123  780633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:57:43.880917  780633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:57:43.909959  780633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:57:44.213347  780633 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 19:57:44.705872  780633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:57:44.705955  780633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:57:44.716198  780633 start.go:563] Will wait 60s for crictl version
	I0920 19:57:44.716281  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:57:44.725092  780633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:57:44.788914  780633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
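start.go gives the restarted runtime up to 60s for /var/run/crio/crio.sock to appear before asking crictl for the version above. A minimal sketch of that wait (the log uses `stat` over SSH; os.Stat is the local equivalent):

    // Minimal sketch of "Will wait 60s for socket path": poll until the
    // CRI-O socket exists on disk.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
    	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }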
	I0920 19:57:44.789031  780633 ssh_runner.go:195] Run: crio --version
	I0920 19:57:44.871874  780633 ssh_runner.go:195] Run: crio --version
	I0920 19:57:44.966383  780633 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 19:57:44.969054  780633 out.go:177]   - env NO_PROXY=192.168.49.2
	I0920 19:57:44.971953  780633 cli_runner.go:164] Run: docker network inspect ha-688277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:57:44.994356  780633 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 19:57:44.998642  780633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
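The one-liner above upserts the host.minikube.internal entry: strip any previous line ending in the tab-separated name, append the fresh mapping, and copy the result back over /etc/hosts. A local Go sketch of the same transformation (writing the real /etc/hosts needs root, so a scratch path is used here):

    // Sketch of the /etc/hosts upsert: drop the old entry, append the new one.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func upsertHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	kept := lines[:0]
    	for _, line := range lines {
    		// same filter as `grep -v $'\thost.minikube.internal$'`
    		if strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	fmt.Println(upsertHost("/tmp/hosts.copy", "192.168.49.1", "host.minikube.internal"))
    }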
	I0920 19:57:45.015032  780633 mustload.go:65] Loading cluster: ha-688277
	I0920 19:57:45.015297  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:57:45.015606  780633 cli_runner.go:164] Run: docker container inspect ha-688277 --format={{.State.Status}}
	I0920 19:57:45.044142  780633 host.go:66] Checking if "ha-688277" exists ...
	I0920 19:57:45.044478  780633 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277 for IP: 192.168.49.3
	I0920 19:57:45.044494  780633 certs.go:194] generating shared ca certs ...
	I0920 19:57:45.044514  780633 certs.go:226] acquiring lock for ca certs: {Name:mk7d5a5d7b3ae5cfc59d92978e91627e15e3360b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:57:45.044637  780633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key
	I0920 19:57:45.044708  780633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key
	I0920 19:57:45.044721  780633 certs.go:256] generating profile certs ...
	I0920 19:57:45.044823  780633 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/client.key
	I0920 19:57:45.044912  780633 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key.a86b8dc0
	I0920 19:57:45.044966  780633 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.key
	I0920 19:57:45.044981  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 19:57:45.045001  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 19:57:45.045019  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 19:57:45.045033  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 19:57:45.045048  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 19:57:45.045061  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 19:57:45.045078  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 19:57:45.045090  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 19:57:45.045154  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734.pem (1338 bytes)
	W0920 19:57:45.045189  780633 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734_empty.pem, impossibly tiny 0 bytes
	I0920 19:57:45.045204  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 19:57:45.045230  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem (1082 bytes)
	I0920 19:57:45.045259  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:57:45.045302  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem (1675 bytes)
	I0920 19:57:45.045361  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem (1708 bytes)
	I0920 19:57:45.045414  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:57:45.045439  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734.pem -> /usr/share/ca-certificates/719734.pem
	I0920 19:57:45.045453  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem -> /usr/share/ca-certificates/7197342.pem
	I0920 19:57:45.045535  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:57:45.098305  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277/id_rsa Username:docker}
	I0920 19:57:45.300420  780633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 19:57:45.330529  780633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 19:57:45.402056  780633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 19:57:45.413870  780633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 19:57:45.458441  780633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 19:57:45.471887  780633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 19:57:45.508588  780633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 19:57:45.520594  780633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0920 19:57:45.548321  780633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 19:57:45.560919  780633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 19:57:45.592277  780633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 19:57:45.606540  780633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0920 19:57:45.637741  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:57:45.686585  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:57:45.732911  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:57:45.776574  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:57:45.838413  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 19:57:45.869629  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 19:57:45.899418  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:57:45.927464  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:57:45.958248  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:57:45.987155  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734.pem --> /usr/share/ca-certificates/719734.pem (1338 bytes)
	I0920 19:57:46.021031  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem --> /usr/share/ca-certificates/7197342.pem (1708 bytes)
	I0920 19:57:46.056291  780633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 19:57:46.087181  780633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 19:57:46.117140  780633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 19:57:46.146954  780633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0920 19:57:46.175123  780633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 19:57:46.217299  780633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0920 19:57:46.251132  780633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 19:57:46.278883  780633 ssh_runner.go:195] Run: openssl version
	I0920 19:57:46.285449  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:57:46.295815  780633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:57:46.300467  780633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:57:46.300542  780633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:57:46.308339  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:57:46.317964  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/719734.pem && ln -fs /usr/share/ca-certificates/719734.pem /etc/ssl/certs/719734.pem"
	I0920 19:57:46.328425  780633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/719734.pem
	I0920 19:57:46.332841  780633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 19:45 /usr/share/ca-certificates/719734.pem
	I0920 19:57:46.332907  780633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/719734.pem
	I0920 19:57:46.344853  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/719734.pem /etc/ssl/certs/51391683.0"
	I0920 19:57:46.365644  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7197342.pem && ln -fs /usr/share/ca-certificates/7197342.pem /etc/ssl/certs/7197342.pem"
	I0920 19:57:46.382364  780633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7197342.pem
	I0920 19:57:46.386941  780633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 19:45 /usr/share/ca-certificates/7197342.pem
	I0920 19:57:46.387013  780633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7197342.pem
	I0920 19:57:46.396713  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7197342.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:57:46.406934  780633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:57:46.411501  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 19:57:46.419123  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 19:57:46.427288  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 19:57:46.434844  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 19:57:46.443065  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 19:57:46.450569  780633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 19:57:46.458519  780633 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I0920 19:57:46.458624  780633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-688277-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-688277 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:57:46.458655  780633 kube-vip.go:115] generating kube-vip config ...
	I0920 19:57:46.458711  780633 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0920 19:57:46.474141  780633 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 19:57:46.474208  780633 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 19:57:46.474292  780633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:57:46.486156  780633 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:57:46.486229  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 19:57:46.494813  780633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 19:57:46.513793  780633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:57:46.543262  780633 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 19:57:46.577179  780633 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0920 19:57:46.581297  780633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:57:46.598235  780633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:57:46.793442  780633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:57:46.810402  780633 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 19:57:46.810927  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:57:46.815750  780633 out.go:177] * Verifying Kubernetes components...
	I0920 19:57:46.818242  780633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:57:46.971336  780633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:57:46.988355  780633 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:57:46.988666  780633 kapi.go:59] client config for ha-688277: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/client.key", CAFile:"/home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a16ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 19:57:46.988825  780633 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0920 19:57:46.989093  780633 node_ready.go:35] waiting up to 6m0s for node "ha-688277-m02" to be "Ready" ...
	I0920 19:57:46.989188  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:57:46.989194  780633 round_trippers.go:469] Request Headers:
	I0920 19:57:46.989202  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:57:46.989207  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:57:58.707112  780633 round_trippers.go:574] Response Status: 500 Internal Server Error in 11717 milliseconds
	I0920 19:57:58.707345  780633 node_ready.go:53] error getting node "ha-688277-m02": etcdserver: request timed out
	I0920 19:57:58.707404  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:57:58.707410  780633 round_trippers.go:469] Request Headers:
	I0920 19:57:58.707418  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:57:58.707422  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:05.827647  780633 round_trippers.go:574] Response Status: 500 Internal Server Error in 7120 milliseconds
	I0920 19:58:05.828910  780633 node_ready.go:53] error getting node "ha-688277-m02": etcdserver: leader changed
	I0920 19:58:05.829027  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:05.829034  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:05.829049  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:05.829053  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:05.882185  780633 round_trippers.go:574] Response Status: 200 OK in 53 milliseconds
	I0920 19:58:05.884039  780633 node_ready.go:49] node "ha-688277-m02" has status "Ready":"True"
	I0920 19:58:05.884067  780633 node_ready.go:38] duration metric: took 18.894946749s for node "ha-688277-m02" to be "Ready" ...
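
	The node_ready.go wait above is a poll loop: GET /api/v1/nodes/<name>, treat the transient 5xx responses ("etcdserver: request timed out", "etcdserver: leader changed") as retryable while etcd re-elects a leader, and succeed once the node's Ready condition reports "True". The pod_ready.go waits that follow use the same pattern per pod. A sketch with plain net/http, reusing the client-cert paths logged by kapi.go:59 above (the HOME-relative layout is an assumption):

	    // nodeready_sketch.go — sketch of the node Ready wait loop above.
	    package main

	    import (
	        "crypto/tls"
	        "crypto/x509"
	        "encoding/json"
	        "fmt"
	        "log"
	        "net/http"
	        "os"
	        "time"
	    )

	    type node struct {
	        Status struct {
	            Conditions []struct {
	                Type   string `json:"type"`
	                Status string `json:"status"`
	            } `json:"conditions"`
	        } `json:"status"`
	    }

	    func main() {
	        cert, err := tls.LoadX509KeyPair(
	            os.Getenv("HOME")+"/.minikube/profiles/ha-688277/client.crt",
	            os.Getenv("HOME")+"/.minikube/profiles/ha-688277/client.key")
	        if err != nil {
	            log.Fatal(err)
	        }
	        ca, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/ca.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        pool := x509.NewCertPool()
	        pool.AppendCertsFromPEM(ca)

	        client := &http.Client{Transport: &http.Transport{
	            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	        }}

	        const url = "https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02"
	        deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err != nil {
	                time.Sleep(2 * time.Second)
	                continue
	            }
	            if resp.StatusCode == http.StatusOK {
	                var n node
	                json.NewDecoder(resp.Body).Decode(&n)
	                resp.Body.Close()
	                for _, c := range n.Status.Conditions {
	                    if c.Type == "Ready" && c.Status == "True" {
	                        fmt.Println("node is Ready")
	                        return
	                    }
	                }
	            } else {
	                resp.Body.Close() // e.g. 500 while etcd elects a new leader
	            }
	            time.Sleep(2 * time.Second)
	        }
	        log.Fatal("timed out waiting for node Ready")
	    }
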
	I0920 19:58:05.884077  780633 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:58:05.884138  780633 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 19:58:05.884149  780633 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 19:58:05.884258  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0920 19:58:05.884264  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:05.884271  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:05.884278  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:05.888523  780633 round_trippers.go:574] Response Status: 429 Too Many Requests in 4 milliseconds
	I0920 19:58:06.889832  780633 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0920 19:58:06.889885  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0920 19:58:06.889890  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.889899  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.889906  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.928599  780633 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
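
	The 429 above is server-side rate limiting: the apiserver attaches a Retry-After header, client-go's with_retry.go sleeps for that interval (1s here) and retries, and the second attempt returns 200. A sketch of honoring Retry-After on 429 — the URL is the one from the log, and the TLS/client-cert setup a real request needs is omitted for brevity:

	    // retryafter_sketch.go — sketch of client-go's Retry-After handling above.
	    package main

	    import (
	        "fmt"
	        "log"
	        "net/http"
	        "strconv"
	        "time"
	    )

	    func getWithRetry(client *http.Client, url string, attempts int) (*http.Response, error) {
	        for i := 0; i <= attempts; i++ {
	            resp, err := client.Get(url)
	            if err != nil {
	                return nil, err
	            }
	            if resp.StatusCode != http.StatusTooManyRequests {
	                return resp, nil
	            }
	            // Sleep for the server-suggested interval before retrying.
	            delay := time.Second
	            if s, err := strconv.Atoi(resp.Header.Get("Retry-After")); err == nil {
	                delay = time.Duration(s) * time.Second
	            }
	            resp.Body.Close()
	            fmt.Printf("got 429, retrying in %s (attempt %d)\n", delay, i+1)
	            time.Sleep(delay)
	        }
	        return nil, fmt.Errorf("gave up after %d retries", attempts)
	    }

	    func main() {
	        resp, err := getWithRetry(http.DefaultClient,
	            "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods", 3)
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer resp.Body.Close()
	        fmt.Println(resp.Status)
	    }
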
	I0920 19:58:06.941131  780633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:06.941323  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:58:06.941351  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.941373  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.941392  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.948254  780633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 19:58:06.949043  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:06.949059  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.949078  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.949082  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.952485  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:06.953049  780633 pod_ready.go:93] pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:06.953065  780633 pod_ready.go:82] duration metric: took 11.852676ms for pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:06.953076  780633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-srdh5" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:06.953147  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-srdh5
	I0920 19:58:06.953151  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.953159  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.953164  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.956624  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:06.957485  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:06.957522  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.957559  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.957579  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.960960  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:06.961663  780633 pod_ready.go:93] pod "coredns-7c65d6cfc9-srdh5" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:06.961717  780633 pod_ready.go:82] duration metric: took 8.621443ms for pod "coredns-7c65d6cfc9-srdh5" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:06.961743  780633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:06.961840  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-688277
	I0920 19:58:06.961876  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.961898  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.961917  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.964810  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:58:06.965594  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:06.965647  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.965674  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.965693  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.969181  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:06.970150  780633 pod_ready.go:93] pod "etcd-ha-688277" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:06.970207  780633 pod_ready.go:82] duration metric: took 8.443059ms for pod "etcd-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:06.970233  780633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:06.970333  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-688277-m02
	I0920 19:58:06.970366  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.970390  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.970411  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.973598  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:06.974383  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:06.974424  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.974462  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.974488  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.977567  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:06.978184  780633 pod_ready.go:93] pod "etcd-ha-688277-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:06.978237  780633 pod_ready.go:82] duration metric: took 7.982343ms for pod "etcd-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:06.978263  780633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-688277-m03" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:06.978385  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-688277-m03
	I0920 19:58:06.978409  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:06.978442  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:06.978466  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:06.981494  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:58:07.090500  780633 request.go:632] Waited for 108.257446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:07.090673  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:07.090711  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:07.090737  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:07.090757  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:07.105530  780633 round_trippers.go:574] Response Status: 404 Not Found in 14 milliseconds
	I0920 19:58:07.105733  780633 pod_ready.go:98] node "ha-688277-m03" hosting pod "etcd-ha-688277-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:07.105777  780633 pod_ready.go:82] duration metric: took 127.474049ms for pod "etcd-ha-688277-m03" in "kube-system" namespace to be "Ready" ...
	E0920 19:58:07.105805  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277-m03" hosting pod "etcd-ha-688277-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:07.105863  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:07.290865  780633 request.go:632] Waited for 184.898885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-688277
	I0920 19:58:07.290988  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-688277
	I0920 19:58:07.291031  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:07.291059  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:07.291083  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:07.295392  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
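
	The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from the client's own QPS/Burst token bucket (rest.Config's rate limiter), which is deliberately distinct from the server-side priority-and-fairness 429 seen earlier. A sketch of the same behavior with golang.org/x/time/rate; the 5 QPS / burst-10 values are client-go's defaults and assumed here:

	    // throttle_sketch.go — sketch of the client-side throttling pauses above.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        "golang.org/x/time/rate"
	    )

	    func main() {
	        limiter := rate.NewLimiter(rate.Limit(5), 10) // ~5 req/s, burst of 10

	        start := time.Now()
	        for i := 0; i < 15; i++ {
	            // Wait blocks until a token is available, producing exactly the
	            // "Waited for ...ms due to client-side throttling" pauses logged above.
	            if err := limiter.Wait(context.Background()); err != nil {
	                panic(err)
	            }
	            fmt.Printf("request %2d at +%v\n", i, time.Since(start).Round(time.Millisecond))
	        }
	    }
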
	I0920 19:58:07.489881  780633 request.go:632] Waited for 193.648309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:07.490002  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:07.490045  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:07.490078  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:07.490099  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:07.493065  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:58:07.493989  780633 pod_ready.go:93] pod "kube-apiserver-ha-688277" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:07.494009  780633 pod_ready.go:82] duration metric: took 388.118145ms for pod "kube-apiserver-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:07.494028  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:07.690757  780633 request.go:632] Waited for 196.658327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-688277-m02
	I0920 19:58:07.690889  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-688277-m02
	I0920 19:58:07.690903  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:07.690919  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:07.690930  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:07.695626  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:58:07.890263  780633 request.go:632] Waited for 193.196224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:07.890347  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:07.890359  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:07.890414  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:07.890429  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:07.893475  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:07.894433  780633 pod_ready.go:93] pod "kube-apiserver-ha-688277-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:07.894457  780633 pod_ready.go:82] duration metric: took 400.419517ms for pod "kube-apiserver-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:07.894484  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-688277-m03" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:08.090650  780633 request.go:632] Waited for 196.098052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-688277-m03
	I0920 19:58:08.090724  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-688277-m03
	I0920 19:58:08.090735  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:08.090744  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:08.090762  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:08.093953  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:08.290715  780633 request.go:632] Waited for 195.327837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:08.290807  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:08.290828  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:08.290839  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:08.290851  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:08.296667  780633 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0920 19:58:08.297184  780633 pod_ready.go:98] node "ha-688277-m03" hosting pod "kube-apiserver-ha-688277-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:08.297229  780633 pod_ready.go:82] duration metric: took 402.722811ms for pod "kube-apiserver-ha-688277-m03" in "kube-system" namespace to be "Ready" ...
	E0920 19:58:08.297246  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277-m03" hosting pod "kube-apiserver-ha-688277-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:08.297257  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:08.490442  780633 request.go:632] Waited for 193.091635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277
	I0920 19:58:08.490535  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277
	I0920 19:58:08.490576  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:08.490587  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:08.490592  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:08.498428  780633 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 19:58:08.689917  780633 request.go:632] Waited for 190.14883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:08.690006  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:08.690016  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:08.690055  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:08.690127  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:08.694075  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:08.695198  780633 pod_ready.go:93] pod "kube-controller-manager-ha-688277" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:08.695224  780633 pod_ready.go:82] duration metric: took 397.952131ms for pod "kube-controller-manager-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:08.695248  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:08.890201  780633 request.go:632] Waited for 194.868311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277-m02
	I0920 19:58:08.890370  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277-m02
	I0920 19:58:08.890381  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:08.890390  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:08.890398  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:08.894038  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:09.090748  780633 request.go:632] Waited for 195.259399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:09.090850  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:09.090863  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:09.090873  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:09.090882  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:09.093524  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:58:09.094577  780633 pod_ready.go:93] pod "kube-controller-manager-ha-688277-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:09.094600  780633 pod_ready.go:82] duration metric: took 399.339105ms for pod "kube-controller-manager-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:09.094616  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-688277-m03" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:09.290543  780633 request.go:632] Waited for 195.807965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277-m03
	I0920 19:58:09.290612  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277-m03
	I0920 19:58:09.290623  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:09.290632  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:09.290641  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:09.306171  780633 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0920 19:58:09.490837  780633 request.go:632] Waited for 173.272394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:09.490911  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:09.490922  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:09.490931  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:09.490970  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:09.498032  780633 round_trippers.go:574] Response Status: 404 Not Found in 7 milliseconds
	I0920 19:58:09.498324  780633 pod_ready.go:98] node "ha-688277-m03" hosting pod "kube-controller-manager-ha-688277-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:09.498359  780633 pod_ready.go:82] duration metric: took 403.730525ms for pod "kube-controller-manager-ha-688277-m03" in "kube-system" namespace to be "Ready" ...
	E0920 19:58:09.498376  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277-m03" hosting pod "kube-controller-manager-ha-688277-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:09.498385  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-596wf" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:09.690600  780633 request.go:632] Waited for 192.112234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-596wf
	I0920 19:58:09.690673  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-596wf
	I0920 19:58:09.690684  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:09.690743  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:09.690755  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:09.698404  780633 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 19:58:09.889886  780633 request.go:632] Waited for 190.197715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:58:09.889957  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:58:09.889968  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:09.889977  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:09.890014  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:09.893803  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:09.894870  780633 pod_ready.go:93] pod "kube-proxy-596wf" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:09.894895  780633 pod_ready.go:82] duration metric: took 396.492085ms for pod "kube-proxy-596wf" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:09.894910  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7w8r5" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:10.090885  780633 request.go:632] Waited for 195.839595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7w8r5
	I0920 19:58:10.090965  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7w8r5
	I0920 19:58:10.090977  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:10.090986  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:10.090995  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:10.094451  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:10.290001  780633 request.go:632] Waited for 194.226362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:10.290104  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:10.290117  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:10.290139  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:10.290151  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:10.296024  780633 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0920 19:58:10.296627  780633 pod_ready.go:98] node "ha-688277-m03" hosting pod "kube-proxy-7w8r5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:10.296675  780633 pod_ready.go:82] duration metric: took 401.744705ms for pod "kube-proxy-7w8r5" in "kube-system" namespace to be "Ready" ...
	E0920 19:58:10.296743  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277-m03" hosting pod "kube-proxy-7w8r5" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:10.296755  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-czqf2" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:10.490752  780633 request.go:632] Waited for 193.840352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-czqf2
	I0920 19:58:10.490822  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-czqf2
	I0920 19:58:10.490832  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:10.490841  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:10.490845  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:10.501267  780633 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0920 19:58:10.690646  780633 request.go:632] Waited for 188.147823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:10.690799  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:10.690828  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:10.690859  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:10.690875  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:10.694151  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:10.695255  780633 pod_ready.go:93] pod "kube-proxy-czqf2" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:10.695326  780633 pod_ready.go:82] duration metric: took 398.54164ms for pod "kube-proxy-czqf2" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:10.695354  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l769r" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:10.890281  780633 request.go:632] Waited for 194.833397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l769r
	I0920 19:58:10.890405  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l769r
	I0920 19:58:10.890441  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:10.890472  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:10.890495  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:10.893531  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:58:11.090669  780633 request.go:632] Waited for 196.107167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:11.090784  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:11.090820  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:11.090850  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:11.090868  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:11.094639  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:11.095408  780633 pod_ready.go:93] pod "kube-proxy-l769r" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:11.095482  780633 pod_ready.go:82] duration metric: took 400.104568ms for pod "kube-proxy-l769r" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:11.095510  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:11.290323  780633 request.go:632] Waited for 194.716724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277
	I0920 19:58:11.290436  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277
	I0920 19:58:11.290458  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:11.290528  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:11.290548  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:11.294130  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:11.489966  780633 request.go:632] Waited for 194.215286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:11.490031  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:58:11.490040  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:11.490049  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:11.490055  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:11.492878  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:58:11.493522  780633 pod_ready.go:93] pod "kube-scheduler-ha-688277" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:11.493543  780633 pod_ready.go:82] duration metric: took 398.012716ms for pod "kube-scheduler-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:11.493555  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:11.690340  780633 request.go:632] Waited for 196.619722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277-m02
	I0920 19:58:11.690416  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277-m02
	I0920 19:58:11.690423  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:11.690432  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:11.690437  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:11.693371  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:58:11.890482  780633 request.go:632] Waited for 196.295635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:11.890550  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:58:11.890557  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:11.890566  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:11.890570  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:11.894036  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:58:11.894717  780633 pod_ready.go:93] pod "kube-scheduler-ha-688277-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 19:58:11.894743  780633 pod_ready.go:82] duration metric: took 401.160881ms for pod "kube-scheduler-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:11.894755  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-688277-m03" in "kube-system" namespace to be "Ready" ...
	I0920 19:58:12.090702  780633 request.go:632] Waited for 195.87504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277-m03
	I0920 19:58:12.090766  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277-m03
	I0920 19:58:12.090772  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:12.090780  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:12.090785  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:12.098404  780633 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 19:58:12.290682  780633 request.go:632] Waited for 191.286504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:12.290743  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m03
	I0920 19:58:12.290750  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:12.290757  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:12.290761  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:12.297236  780633 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0920 19:58:12.297356  780633 pod_ready.go:98] node "ha-688277-m03" hosting pod "kube-scheduler-ha-688277-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:12.297372  780633 pod_ready.go:82] duration metric: took 402.608185ms for pod "kube-scheduler-ha-688277-m03" in "kube-system" namespace to be "Ready" ...
	E0920 19:58:12.297382  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277-m03" hosting pod "kube-scheduler-ha-688277-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-688277-m03": nodes "ha-688277-m03" not found
	I0920 19:58:12.297390  780633 pod_ready.go:39] duration metric: took 6.413301599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:58:12.297406  780633 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:58:12.297469  780633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:58:12.316463  780633 api_server.go:72] duration metric: took 25.506009681s to wait for apiserver process to appear ...
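
	Before probing /healthz, api_server.go first confirms a kube-apiserver process exists at all, via the pgrep run over SSH above. The same check sketched locally — pgrep exits non-zero when nothing matches, which exec surfaces as an error:

	    // pgrep_sketch.go — sketch of the apiserver process check above.
	    package main

	    import (
	        "fmt"
	        "log"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	        if err != nil {
	            log.Fatal("apiserver process not found: ", err)
	        }
	        fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
	    }
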
	I0920 19:58:12.316542  780633 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:58:12.316578  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:12.326144  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:12.326180  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
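
	Each /healthz response lists every registered check as [+] ok or [-] failed; any single [-] makes the endpoint return 500 with a trailing "healthz check failed", which is why the loop below keeps polling until start-service-ip-repair-controllers comes up after the restart. A sketch of that wait loop — InsecureSkipVerify stands in for the real cluster certs and is an assumption, and unauthenticated access to /healthz may be rejected on other clusters:

	    // healthz_sketch.go — sketch of the api_server.go healthz wait above.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "log"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   5 * time.Second,
	        }
	        for {
	            resp, err := client.Get("https://192.168.49.2:8443/healthz")
	            if err != nil {
	                log.Println(err)
	            } else {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("apiserver healthy")
	                    return
	                }
	                // On 500 the body lists one [+]/[-] line per check, as above.
	                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond) // the loop above polls roughly every 500ms
	        }
	    }
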
	I0920 19:58:12.817023  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:12.825169  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:12.825206  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:13.316751  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:13.325038  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:13.325074  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:13.816711  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:13.824561  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:13.824641  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:14.317327  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:14.325393  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:14.325439  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:14.816769  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:14.825201  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:14.825234  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:15.316800  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:15.324572  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:15.324603  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:15.817273  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:15.826892  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:15.826925  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:16.317622  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:16.325745  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:16.325832  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:16.817161  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:16.825305  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:16.825336  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:17.316789  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:17.324748  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:17.324788  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:17.817715  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:17.826555  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:17.826587  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:18.316761  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:18.324524  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:18.324552  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:18.816777  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:18.825487  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:18.825570  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:19.316844  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:19.324532  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:19.324569  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:19.816846  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:19.826337  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:19.826367  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:20.316769  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:20.324763  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:20.324804  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:20.817373  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:20.827703  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:20.827734  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:21.317055  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:21.325181  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:21.325209  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:21.816812  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:21.825088  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:21.825119  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:22.316745  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:22.324592  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:22.324626  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
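	The entries above show minikube's apiserver health wait loop: api_server.go polls https://192.168.49.2:8443/healthz roughly every 500ms and treats the control plane as unready while the endpoint returns 500. The 500s here all trace to one post-start hook, start-service-ip-repair-controllers, that has not completed; kube-apiserver keeps /healthz failing until every registered check passes, and it prints "reason withheld" when the caller is not authorized to see verbose failure details. As a minimal sketch of this polling pattern (not minikube's actual implementation; the URL, poll interval, and timeout below are illustrative assumptions):

```go
// Sketch of a healthz wait loop like the one producing the log above.
// Assumed values: endpoint URL, 500ms cadence, 4-minute overall timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Local clusters typically serve self-signed certs; a real client
		// would trust the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // /healthz returned 200: apiserver is ready
			}
			// Non-200: the body lists each check ([+] ok / [-] failed),
			// exactly as captured in the report above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

	The elided repetition of this cycle is summarized below.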
	[... output truncated: the same check/response cycle repeats at ~500ms intervals (I0920 19:58:22.817 through 19:58:30.817, api_server.go:253/279/103); every attempt returns HTTP 500 with an identical check list whose only failure is "[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld" ...]
	W0920 19:58:30.825872  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:31.317605  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:31.325800  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:31.325830  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:31.817371  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:31.825346  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:31.825410  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:32.316797  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:32.340875  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:32.340907  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:32.817556  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:32.825274  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:32.825303  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:33.316861  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:33.324783  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:33.324829  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:33.817531  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:33.825702  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:33.825738  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:34.317347  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:34.327292  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:34.327321  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:34.817046  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:34.824806  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:34.824845  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:35.317432  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:35.325249  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:35.325281  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:35.817379  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:35.826040  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:35.826076  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:36.316730  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:36.325003  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:36.325040  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:36.816649  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:36.824963  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:36.824995  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:37.317323  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:37.327417  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:37.327505  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:37.817400  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:37.825548  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:37.825583  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:38.316806  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:38.326589  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:38.326618  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:38.817141  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:38.825301  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:38.825334  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:39.316849  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:39.327566  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:39.327603  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:39.817348  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:39.825271  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:39.825311  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:40.316770  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:40.327137  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:40.327224  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:40.816766  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:40.824967  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:40.825007  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:41.317600  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:41.326950  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:41.326985  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:41.817601  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:41.826912  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:41.826986  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:42.317390  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:42.328553  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:42.328589  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:42.817242  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:42.825342  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:42.825377  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:43.316893  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:43.324733  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:43.324781  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:43.817421  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:43.829792  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:43.829829  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:44.316722  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:44.325020  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:44.325065  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:44.817642  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:44.826433  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:44.826461  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:45.317475  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:45.325522  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:45.325551  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:45.817259  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:45.825351  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:45.825380  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:46.316946  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:46.324741  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:46.324770  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
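
The repeated 500s above come from minikube polling the apiserver's /healthz endpoint until its post-start hooks settle; each poll prints the per-hook [+]/[-] status list twice, once at I level and once at W level. Below is a minimal Go sketch of that kind of poll loop. The URL, timeout, backoff, and the skipped TLS verification are illustrative assumptions only; minikube's real client in api_server.go presumably authenticates with the cluster's certificates rather than skipping verification.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 "ok" or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real client should verify the apiserver cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// A 500 body lists each post-start hook as [+] ok or [-] failed,
			// exactly like the dumps in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second gaps between checks
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}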
	I0920 19:58:46.817533  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:58:46.817629  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:58:46.864609  780633 cri.go:89] found id: "3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9"
	I0920 19:58:46.864644  780633 cri.go:89] found id: "237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e"
	I0920 19:58:46.864653  780633 cri.go:89] found id: ""
	I0920 19:58:46.864665  780633 logs.go:276] 2 containers: [3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9 237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e]
	I0920 19:58:46.864849  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:46.869245  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:46.874399  780633 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:58:46.874521  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:58:46.930163  780633 cri.go:89] found id: "d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074"
	I0920 19:58:46.930255  780633 cri.go:89] found id: "4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd"
	I0920 19:58:46.930276  780633 cri.go:89] found id: ""
	I0920 19:58:46.930300  780633 logs.go:276] 2 containers: [d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074 4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd]
	I0920 19:58:46.930401  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:46.935285  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:46.940097  780633 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:58:46.940174  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:58:46.981525  780633 cri.go:89] found id: ""
	I0920 19:58:46.981593  780633 logs.go:276] 0 containers: []
	W0920 19:58:46.981614  780633 logs.go:278] No container was found matching "coredns"
	I0920 19:58:46.981621  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:58:46.981686  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:58:47.024288  780633 cri.go:89] found id: "fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e"
	I0920 19:58:47.024315  780633 cri.go:89] found id: "797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30"
	I0920 19:58:47.024321  780633 cri.go:89] found id: ""
	I0920 19:58:47.024328  780633 logs.go:276] 2 containers: [fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e 797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30]
	I0920 19:58:47.024389  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:47.028578  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:47.032810  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:58:47.032900  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:58:47.074609  780633 cri.go:89] found id: "8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd"
	I0920 19:58:47.074632  780633 cri.go:89] found id: ""
	I0920 19:58:47.074640  780633 logs.go:276] 1 containers: [8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd]
	I0920 19:58:47.074699  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:47.078406  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:58:47.078489  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:58:47.137197  780633 cri.go:89] found id: "7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4"
	I0920 19:58:47.137271  780633 cri.go:89] found id: "eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e"
	I0920 19:58:47.137281  780633 cri.go:89] found id: ""
	I0920 19:58:47.137289  780633 logs.go:276] 2 containers: [7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4 eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e]
	I0920 19:58:47.137388  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:47.141475  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:47.145133  780633 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:58:47.145209  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:58:47.191647  780633 cri.go:89] found id: "c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3"
	I0920 19:58:47.191670  780633 cri.go:89] found id: ""
	I0920 19:58:47.191678  780633 logs.go:276] 1 containers: [c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3]
	I0920 19:58:47.191738  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:47.196047  780633 logs.go:123] Gathering logs for etcd [d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074] ...
	I0920 19:58:47.196076  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074"
	I0920 19:58:47.249897  780633 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:58:47.249933  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:58:47.319732  780633 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:58:47.319767  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:58:47.582817  780633 logs.go:123] Gathering logs for kube-apiserver [3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9] ...
	I0920 19:58:47.582851  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9"
	I0920 19:58:47.642065  780633 logs.go:123] Gathering logs for kube-apiserver [237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e] ...
	I0920 19:58:47.642101  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e"
	I0920 19:58:47.687387  780633 logs.go:123] Gathering logs for kube-scheduler [fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e] ...
	I0920 19:58:47.687417  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e"
	I0920 19:58:47.729236  780633 logs.go:123] Gathering logs for kindnet [c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3] ...
	I0920 19:58:47.729266  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3"
	I0920 19:58:47.773378  780633 logs.go:123] Gathering logs for container status ...
	I0920 19:58:47.773421  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:58:47.820550  780633 logs.go:123] Gathering logs for kubelet ...
	I0920 19:58:47.820587  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:58:47.908466  780633 logs.go:123] Gathering logs for kube-scheduler [797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30] ...
	I0920 19:58:47.908550  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30"
	I0920 19:58:47.947643  780633 logs.go:123] Gathering logs for kube-controller-manager [eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e] ...
	I0920 19:58:47.947685  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e"
	I0920 19:58:47.991726  780633 logs.go:123] Gathering logs for dmesg ...
	I0920 19:58:47.991768  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:58:48.009065  780633 logs.go:123] Gathering logs for kube-proxy [8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd] ...
	I0920 19:58:48.009192  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd"
	I0920 19:58:48.066735  780633 logs.go:123] Gathering logs for kube-controller-manager [7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4] ...
	I0920 19:58:48.066767  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4"
	I0920 19:58:48.153879  780633 logs.go:123] Gathering logs for etcd [4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd] ...
	I0920 19:58:48.153956  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd"
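
The block above follows a fixed pattern for each control-plane component: run "sudo crictl ps -a --quiet --name=<component>" to discover container IDs, then "crictl logs --tail 400 <id>" per container. The following is a rough Go sketch of that discover-then-tail loop; the component names mirror the log, and error handling is deliberately minimal, so treat it as an illustration rather than minikube's actual logs.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns all container IDs whose name matches, running or not,
// the same way the log's "crictl ps -a --quiet --name=..." invocations do.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one 64-hex ID per line
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no %q containers found\n", component)
			continue
		}
		for _, id := range ids {
			// Same shape as the log's: crictl logs --tail 400 <id>
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			short := id
			if len(short) > 12 {
				short = short[:12]
			}
			fmt.Printf("=== %s [%s] ===\n%s", component, short, logs)
		}
	}
}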
	I0920 19:58:50.741532  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:53.600154  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 19:58:53.600182  780633 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 19:58:53.600213  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:58:53.600280  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:58:53.671014  780633 cri.go:89] found id: "3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9"
	I0920 19:58:53.671035  780633 cri.go:89] found id: "237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e"
	I0920 19:58:53.671041  780633 cri.go:89] found id: ""
	I0920 19:58:53.671048  780633 logs.go:276] 2 containers: [3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9 237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e]
	I0920 19:58:53.671107  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:53.676432  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:53.682458  780633 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:58:53.682537  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:58:53.742949  780633 cri.go:89] found id: "d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074"
	I0920 19:58:53.742971  780633 cri.go:89] found id: "4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd"
	I0920 19:58:53.742976  780633 cri.go:89] found id: ""
	I0920 19:58:53.742984  780633 logs.go:276] 2 containers: [d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074 4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd]
	I0920 19:58:53.743040  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:53.747055  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:53.751257  780633 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:58:53.751342  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:58:53.806803  780633 cri.go:89] found id: ""
	I0920 19:58:53.806837  780633 logs.go:276] 0 containers: []
	W0920 19:58:53.806847  780633 logs.go:278] No container was found matching "coredns"
	I0920 19:58:53.806853  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:58:53.806927  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:58:53.856084  780633 cri.go:89] found id: "fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e"
	I0920 19:58:53.856106  780633 cri.go:89] found id: "797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30"
	I0920 19:58:53.856112  780633 cri.go:89] found id: ""
	I0920 19:58:53.856119  780633 logs.go:276] 2 containers: [fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e 797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30]
	I0920 19:58:53.856207  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:53.860305  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:53.864128  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:58:53.864213  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:58:53.923593  780633 cri.go:89] found id: "8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd"
	I0920 19:58:53.923617  780633 cri.go:89] found id: ""
	I0920 19:58:53.923625  780633 logs.go:276] 1 containers: [8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd]
	I0920 19:58:53.923681  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:53.934129  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:58:53.934206  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:58:53.980052  780633 cri.go:89] found id: "7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4"
	I0920 19:58:53.980078  780633 cri.go:89] found id: "eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e"
	I0920 19:58:53.980082  780633 cri.go:89] found id: ""
	I0920 19:58:53.980090  780633 logs.go:276] 2 containers: [7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4 eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e]
	I0920 19:58:53.980178  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:53.984270  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:53.987927  780633 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:58:53.988015  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:58:54.038258  780633 cri.go:89] found id: "c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3"
	I0920 19:58:54.038292  780633 cri.go:89] found id: ""
	I0920 19:58:54.038302  780633 logs.go:276] 1 containers: [c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3]
	I0920 19:58:54.038377  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:54.043414  780633 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:58:54.043444  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:58:54.351359  780633 logs.go:123] Gathering logs for kube-proxy [8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd] ...
	I0920 19:58:54.351397  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd"
	I0920 19:58:54.424295  780633 logs.go:123] Gathering logs for kube-controller-manager [eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e] ...
	I0920 19:58:54.424329  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e"
	I0920 19:58:54.495936  780633 logs.go:123] Gathering logs for container status ...
	I0920 19:58:54.495973  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:58:54.557813  780633 logs.go:123] Gathering logs for kube-apiserver [3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9] ...
	I0920 19:58:54.557850  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9"
	I0920 19:58:54.640911  780633 logs.go:123] Gathering logs for etcd [d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074] ...
	I0920 19:58:54.640964  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074"
	I0920 19:58:54.725259  780633 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:58:54.725300  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:58:54.809123  780633 logs.go:123] Gathering logs for dmesg ...
	I0920 19:58:54.809164  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:58:54.831421  780633 logs.go:123] Gathering logs for kube-controller-manager [7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4] ...
	I0920 19:58:54.831454  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4"
	I0920 19:58:54.942516  780633 logs.go:123] Gathering logs for kindnet [c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3] ...
	I0920 19:58:54.942555  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3"
	I0920 19:58:54.992825  780633 logs.go:123] Gathering logs for kubelet ...
	I0920 19:58:54.992855  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:58:55.090620  780633 logs.go:123] Gathering logs for kube-apiserver [237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e] ...
	I0920 19:58:55.090665  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e"
	I0920 19:58:55.144449  780633 logs.go:123] Gathering logs for etcd [4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd] ...
	I0920 19:58:55.144480  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd"
	I0920 19:58:55.211733  780633 logs.go:123] Gathering logs for kube-scheduler [fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e] ...
	I0920 19:58:55.211771  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e"
	I0920 19:58:55.256486  780633 logs.go:123] Gathering logs for kube-scheduler [797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30] ...
	I0920 19:58:55.256521  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30"
	I0920 19:58:57.802594  780633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:58:57.812269  780633 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 19:58:57.812406  780633 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0920 19:58:57.812430  780633 round_trippers.go:469] Request Headers:
	I0920 19:58:57.812441  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:58:57.812445  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:58:57.825833  780633 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 19:58:57.825953  780633 api_server.go:141] control plane version: v1.31.1
	I0920 19:58:57.825975  780633 api_server.go:131] duration metric: took 45.509413514s to wait for apiserver health ...
	I0920 19:58:57.825983  780633 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:58:57.826005  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 19:58:57.826070  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 19:58:57.876430  780633 cri.go:89] found id: "3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9"
	I0920 19:58:57.876453  780633 cri.go:89] found id: "237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e"
	I0920 19:58:57.876459  780633 cri.go:89] found id: ""
	I0920 19:58:57.876466  780633 logs.go:276] 2 containers: [3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9 237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e]
	I0920 19:58:57.876524  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:57.880611  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:57.884670  780633 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 19:58:57.884846  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 19:58:57.933289  780633 cri.go:89] found id: "d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074"
	I0920 19:58:57.933353  780633 cri.go:89] found id: "4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd"
	I0920 19:58:57.933372  780633 cri.go:89] found id: ""
	I0920 19:58:57.933395  780633 logs.go:276] 2 containers: [d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074 4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd]
	I0920 19:58:57.933466  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:57.937241  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:57.940805  780633 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 19:58:57.940927  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 19:58:57.978893  780633 cri.go:89] found id: ""
	I0920 19:58:57.978916  780633 logs.go:276] 0 containers: []
	W0920 19:58:57.978926  780633 logs.go:278] No container was found matching "coredns"
	I0920 19:58:57.978933  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 19:58:57.979035  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 19:58:58.024058  780633 cri.go:89] found id: "fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e"
	I0920 19:58:58.024082  780633 cri.go:89] found id: "797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30"
	I0920 19:58:58.024087  780633 cri.go:89] found id: ""
	I0920 19:58:58.024094  780633 logs.go:276] 2 containers: [fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e 797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30]
	I0920 19:58:58.024191  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:58.029567  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:58.035907  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 19:58:58.035986  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 19:58:58.091781  780633 cri.go:89] found id: "8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd"
	I0920 19:58:58.091802  780633 cri.go:89] found id: ""
	I0920 19:58:58.091809  780633 logs.go:276] 1 containers: [8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd]
	I0920 19:58:58.091870  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:58.095848  780633 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 19:58:58.095950  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 19:58:58.138956  780633 cri.go:89] found id: "7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4"
	I0920 19:58:58.139047  780633 cri.go:89] found id: "eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e"
	I0920 19:58:58.139067  780633 cri.go:89] found id: ""
	I0920 19:58:58.139090  780633 logs.go:276] 2 containers: [7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4 eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e]
	I0920 19:58:58.139190  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:58.143510  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:58.146988  780633 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 19:58:58.147070  780633 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 19:58:58.190555  780633 cri.go:89] found id: "c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3"
	I0920 19:58:58.190578  780633 cri.go:89] found id: ""
	I0920 19:58:58.190587  780633 logs.go:276] 1 containers: [c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3]
	I0920 19:58:58.190648  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:58:58.194497  780633 logs.go:123] Gathering logs for kube-apiserver [237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e] ...
	I0920 19:58:58.194524  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 237714fe3e3e4d36a464a9c34cc79cfd30692813df596915f754242c4dd1568e"
	I0920 19:58:58.237545  780633 logs.go:123] Gathering logs for kube-controller-manager [eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e] ...
	I0920 19:58:58.237575  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eabf0d7d85251f80284f137d0bdd3a8262dd49e907c70d99a26b46f53b4b017e"
	I0920 19:58:58.277306  780633 logs.go:123] Gathering logs for dmesg ...
	I0920 19:58:58.277392  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 19:58:58.294830  780633 logs.go:123] Gathering logs for kube-apiserver [3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9] ...
	I0920 19:58:58.294864  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3012d2e9a8bfc227514bb90848fb55d97556e254e7e9b097f88e3e728fac99d9"
	I0920 19:58:58.359627  780633 logs.go:123] Gathering logs for CRI-O ...
	I0920 19:58:58.359665  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 19:58:58.439766  780633 logs.go:123] Gathering logs for container status ...
	I0920 19:58:58.439804  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 19:58:58.488609  780633 logs.go:123] Gathering logs for kubelet ...
	I0920 19:58:58.488655  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 19:58:58.587564  780633 logs.go:123] Gathering logs for describe nodes ...
	I0920 19:58:58.587647  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 19:58:58.894065  780633 logs.go:123] Gathering logs for etcd [d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074] ...
	I0920 19:58:58.894102  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9ca88db9e45350759db6ad0a1a826c73d5034ebd35106378b7c6c862bc04074"
	I0920 19:58:58.961293  780633 logs.go:123] Gathering logs for etcd [4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd] ...
	I0920 19:58:58.961329  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3be0c7581ea663796db3a5245ff773b8bf724b187e084332786f497cb536cd"
	I0920 19:58:59.041535  780633 logs.go:123] Gathering logs for kube-scheduler [fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e] ...
	I0920 19:58:59.041573  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe1033da90dedd2eabe01a3c22475ed24642d00aab870bf319921f89a3ab201e"
	I0920 19:58:59.084525  780633 logs.go:123] Gathering logs for kube-proxy [8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd] ...
	I0920 19:58:59.084555  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8941945e4406168f5255f73d116cd7ddbd086bf3422af191dd49a5790e8e99fd"
	I0920 19:58:59.128478  780633 logs.go:123] Gathering logs for kube-scheduler [797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30] ...
	I0920 19:58:59.128507  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 797b72a0f21d23637411cde96ad5976c2882f05d22738a36bf9760fa6b9dcf30"
	I0920 19:58:59.170691  780633 logs.go:123] Gathering logs for kube-controller-manager [7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4] ...
	I0920 19:58:59.170772  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7bc9618320efe74e015bdee15d9f93e97e75a43afeb3bc10f38067a17ebbdac4"
	I0920 19:58:59.239874  780633 logs.go:123] Gathering logs for kindnet [c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3] ...
	I0920 19:58:59.239954  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e410c461b9044774fea864305bcd4a2ca9e08a48b244ea07eae86c07901de3"
	I0920 19:59:01.789063  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0920 19:59:01.789092  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:01.789108  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:01.789124  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:01.797812  780633 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 19:59:01.805678  780633 system_pods.go:59] 19 kube-system pods found
	I0920 19:59:01.805724  780633 system_pods.go:61] "coredns-7c65d6cfc9-f5x4v" [dfecd768-54d2-4c3b-8979-be893d5749e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:59:01.805734  780633 system_pods.go:61] "coredns-7c65d6cfc9-srdh5" [61afd12f-8ffa-4a8d-8403-1410795d1a51] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:59:01.805741  780633 system_pods.go:61] "etcd-ha-688277" [003734ca-45bf-45d6-9363-e514d0d7187b] Running
	I0920 19:59:01.805747  780633 system_pods.go:61] "etcd-ha-688277-m02" [f37d0956-8de3-4e59-86fd-990f49a1ba39] Running
	I0920 19:59:01.805751  780633 system_pods.go:61] "kindnet-6xnsl" [1769ff4d-b4a0-485d-9872-54372f8d9473] Running
	I0920 19:59:01.805754  780633 system_pods.go:61] "kindnet-d4b7m" [c63328de-a6d6-499c-88ef-df1548d6b305] Running
	I0920 19:59:01.805762  780633 system_pods.go:61] "kindnet-h85n4" [671651c0-6c07-4766-8b15-9a58a28b5813] Running
	I0920 19:59:01.805769  780633 system_pods.go:61] "kube-apiserver-ha-688277" [b0d2ced8-6adb-40fa-aa55-1622b6b4c5bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:59:01.805778  780633 system_pods.go:61] "kube-apiserver-ha-688277-m02" [be16eded-0097-4d94-959b-a63a68884108] Running
	I0920 19:59:01.805786  780633 system_pods.go:61] "kube-controller-manager-ha-688277" [bda9b12b-9657-4214-84e3-1935b851b22d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:59:01.805797  780633 system_pods.go:61] "kube-controller-manager-ha-688277-m02" [f056d347-efd8-47e4-b560-acfaa9de00d4] Running
	I0920 19:59:01.805802  780633 system_pods.go:61] "kube-proxy-596wf" [09813dbe-c4ae-4efa-ac8b-34bb2367ab63] Running
	I0920 19:59:01.805806  780633 system_pods.go:61] "kube-proxy-czqf2" [d7ed41c2-0ccf-439c-8e86-206d97af79bd] Running
	I0920 19:59:01.805814  780633 system_pods.go:61] "kube-proxy-l769r" [eb9fa08e-f8d4-4f7d-b274-1ee7c2507157] Running
	I0920 19:59:01.805817  780633 system_pods.go:61] "kube-scheduler-ha-688277" [d1c93d06-842b-4d44-a46b-cf264376c2ad] Running
	I0920 19:59:01.805821  780633 system_pods.go:61] "kube-scheduler-ha-688277-m02" [4b9081e9-a583-4d4b-8756-4b7ecac824aa] Running
	I0920 19:59:01.805827  780633 system_pods.go:61] "kube-vip-ha-688277" [94084243-436d-4ecc-9b09-dc528b92edf7] Running
	I0920 19:59:01.805832  780633 system_pods.go:61] "kube-vip-ha-688277-m02" [155862be-39d8-433f-a85c-258e03c28023] Running
	I0920 19:59:01.805836  780633 system_pods.go:61] "storage-provisioner" [caad46ab-c6af-4e88-9d8d-e9e4f8f00b38] Running
	I0920 19:59:01.805841  780633 system_pods.go:74] duration metric: took 3.979852841s to wait for pod list to return data ...
	I0920 19:59:01.805852  780633 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:59:01.805948  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0920 19:59:01.805957  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:01.805976  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:01.805981  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:01.809887  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:01.810192  780633 default_sa.go:45] found service account: "default"
	I0920 19:59:01.810214  780633 default_sa.go:55] duration metric: took 4.35545ms for default service account to be created ...
	I0920 19:59:01.810224  780633 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:59:01.810300  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0920 19:59:01.810311  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:01.810320  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:01.810326  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:01.815280  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:01.826714  780633 system_pods.go:86] 19 kube-system pods found
	I0920 19:59:01.826763  780633 system_pods.go:89] "coredns-7c65d6cfc9-f5x4v" [dfecd768-54d2-4c3b-8979-be893d5749e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:59:01.826776  780633 system_pods.go:89] "coredns-7c65d6cfc9-srdh5" [61afd12f-8ffa-4a8d-8403-1410795d1a51] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 19:59:01.826809  780633 system_pods.go:89] "etcd-ha-688277" [003734ca-45bf-45d6-9363-e514d0d7187b] Running
	I0920 19:59:01.826824  780633 system_pods.go:89] "etcd-ha-688277-m02" [f37d0956-8de3-4e59-86fd-990f49a1ba39] Running
	I0920 19:59:01.826830  780633 system_pods.go:89] "kindnet-6xnsl" [1769ff4d-b4a0-485d-9872-54372f8d9473] Running
	I0920 19:59:01.826835  780633 system_pods.go:89] "kindnet-d4b7m" [c63328de-a6d6-499c-88ef-df1548d6b305] Running
	I0920 19:59:01.826840  780633 system_pods.go:89] "kindnet-h85n4" [671651c0-6c07-4766-8b15-9a58a28b5813] Running
	I0920 19:59:01.826847  780633 system_pods.go:89] "kube-apiserver-ha-688277" [b0d2ced8-6adb-40fa-aa55-1622b6b4c5bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 19:59:01.826858  780633 system_pods.go:89] "kube-apiserver-ha-688277-m02" [be16eded-0097-4d94-959b-a63a68884108] Running
	I0920 19:59:01.826874  780633 system_pods.go:89] "kube-controller-manager-ha-688277" [bda9b12b-9657-4214-84e3-1935b851b22d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 19:59:01.826883  780633 system_pods.go:89] "kube-controller-manager-ha-688277-m02" [f056d347-efd8-47e4-b560-acfaa9de00d4] Running
	I0920 19:59:01.826892  780633 system_pods.go:89] "kube-proxy-596wf" [09813dbe-c4ae-4efa-ac8b-34bb2367ab63] Running
	I0920 19:59:01.826898  780633 system_pods.go:89] "kube-proxy-czqf2" [d7ed41c2-0ccf-439c-8e86-206d97af79bd] Running
	I0920 19:59:01.826912  780633 system_pods.go:89] "kube-proxy-l769r" [eb9fa08e-f8d4-4f7d-b274-1ee7c2507157] Running
	I0920 19:59:01.826917  780633 system_pods.go:89] "kube-scheduler-ha-688277" [d1c93d06-842b-4d44-a46b-cf264376c2ad] Running
	I0920 19:59:01.826921  780633 system_pods.go:89] "kube-scheduler-ha-688277-m02" [4b9081e9-a583-4d4b-8756-4b7ecac824aa] Running
	I0920 19:59:01.826925  780633 system_pods.go:89] "kube-vip-ha-688277" [94084243-436d-4ecc-9b09-dc528b92edf7] Running
	I0920 19:59:01.826933  780633 system_pods.go:89] "kube-vip-ha-688277-m02" [155862be-39d8-433f-a85c-258e03c28023] Running
	I0920 19:59:01.826938  780633 system_pods.go:89] "storage-provisioner" [caad46ab-c6af-4e88-9d8d-e9e4f8f00b38] Running
	I0920 19:59:01.826948  780633 system_pods.go:126] duration metric: took 16.718703ms to wait for k8s-apps to be running ...
	I0920 19:59:01.826955  780633 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:59:01.827025  780633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:59:01.840493  780633 system_svc.go:56] duration metric: took 13.51467ms WaitForService to wait for kubelet
	I0920 19:59:01.840522  780633 kubeadm.go:582] duration metric: took 1m15.030073961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
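
Once /healthz finally returns 200, the log waits in turn for kube-system pods, the default service account, and kubelet. The client-go sketch below shows only the pod-listing half of that, assuming the kubeconfig path seen in the log; the real system_pods.go check also accounts for container readiness, which is why pods above appear as "Running / Ready:ContainersNotReady".

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; adjust for other environments.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	running := 0
	for _, pod := range pods.Items {
		if pod.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	fmt.Printf("%d/%d kube-system pods Running\n", running, len(pods.Items))
}

The subsequent NodePressure step is the same idea against the nodes endpoint: list the nodes and read capacity off node.Status.Capacity, which is where the "ephemeral capacity is 203034800Ki" and "cpu capacity is 2" lines below come from.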
	I0920 19:59:01.840546  780633 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:59:01.840641  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0920 19:59:01.840651  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:01.840660  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:01.840664  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:01.843671  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:01.846239  780633 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:59:01.846322  780633 node_conditions.go:123] node cpu capacity is 2
	I0920 19:59:01.846350  780633 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:59:01.846374  780633 node_conditions.go:123] node cpu capacity is 2
	I0920 19:59:01.846409  780633 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:59:01.846435  780633 node_conditions.go:123] node cpu capacity is 2
	I0920 19:59:01.846458  780633 node_conditions.go:105] duration metric: took 5.905005ms to run NodePressure ...
	I0920 19:59:01.846486  780633 start.go:241] waiting for startup goroutines ...
	I0920 19:59:01.846546  780633 start.go:255] writing updated cluster config ...
	I0920 19:59:01.849955  780633 out.go:201] 
	I0920 19:59:01.853536  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:59:01.853707  780633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/config.json ...
	I0920 19:59:01.856942  780633 out.go:177] * Starting "ha-688277-m04" worker node in "ha-688277" cluster
	I0920 19:59:01.860286  780633 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 19:59:01.862961  780633 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:59:01.865773  780633 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:59:01.865815  780633 cache.go:56] Caching tarball of preloaded images
	I0920 19:59:01.865879  780633 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:59:01.865934  780633 preload.go:172] Found /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0920 19:59:01.865945  780633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 19:59:01.866102  780633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/config.json ...
	W0920 19:59:01.887524  780633 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0920 19:59:01.887612  780633 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:59:01.887716  780633 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:59:01.887743  780633 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:59:01.887749  780633 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:59:01.887773  780633 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:59:01.887784  780633 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:59:01.889216  780633 image.go:273] response: 
	I0920 19:59:02.033121  780633 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:59:02.033176  780633 cache.go:194] Successfully downloaded all kic artifacts
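
The "is of wrong architecture" warning a few lines up is the outcome of comparing the locally cached kicbase image's architecture against the host's (arm64 here), which forces the fall-back to the cached tarball. A small sketch of such a check via docker image inspect follows; the image digest is omitted for brevity and the fallback message is illustrative, not minikube's exact wording.

package main

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
)

func main() {
	// Tag taken from the log; the @sha256 digest is omitted here for brevity.
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662"
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Architecture}}", ref).Output()
	if err != nil {
		fmt.Println("image not in local daemon:", err)
		return
	}
	arch := strings.TrimSpace(string(out))
	if arch != runtime.GOARCH { // e.g. an amd64 image on this arm64 host
		fmt.Printf("image %s is of wrong architecture (%s, want %s); using cached tarball instead\n",
			ref, arch, runtime.GOARCH)
	}
}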
	I0920 19:59:02.033213  780633 start.go:360] acquireMachinesLock for ha-688277-m04: {Name:mk9539df2acea235fe610ef88062d2a7f247188c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:59:02.033294  780633 start.go:364] duration metric: took 59.166µs to acquireMachinesLock for "ha-688277-m04"
	I0920 19:59:02.033320  780633 start.go:96] Skipping create...Using existing machine configuration
	I0920 19:59:02.033332  780633 fix.go:54] fixHost starting: m04
	I0920 19:59:02.033632  780633 cli_runner.go:164] Run: docker container inspect ha-688277-m04 --format={{.State.Status}}
	I0920 19:59:02.051055  780633 fix.go:112] recreateIfNeeded on ha-688277-m04: state=Stopped err=<nil>
	W0920 19:59:02.051083  780633 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 19:59:02.055795  780633 out.go:177] * Restarting existing docker container for "ha-688277-m04" ...
	I0920 19:59:02.058299  780633 cli_runner.go:164] Run: docker start ha-688277-m04
	I0920 19:59:02.370082  780633 cli_runner.go:164] Run: docker container inspect ha-688277-m04 --format={{.State.Status}}
	I0920 19:59:02.397375  780633 kic.go:430] container "ha-688277-m04" state is running.
	I0920 19:59:02.397795  780633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277-m04
	I0920 19:59:02.427813  780633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/config.json ...
	I0920 19:59:02.428085  780633 machine.go:93] provisionDockerMachine start ...
	I0920 19:59:02.428165  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:59:02.453107  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:59:02.453349  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0920 19:59:02.453359  780633 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:59:02.453986  780633 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48564->127.0.0.1:32838: read: connection reset by peer
	I0920 19:59:05.605789  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-688277-m04
	
	I0920 19:59:05.605869  780633 ubuntu.go:169] provisioning hostname "ha-688277-m04"
	I0920 19:59:05.605975  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:59:05.628723  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:59:05.628974  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0920 19:59:05.628993  780633 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-688277-m04 && echo "ha-688277-m04" | sudo tee /etc/hostname
	I0920 19:59:05.786116  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-688277-m04
	
	I0920 19:59:05.786210  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:59:05.804613  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:59:05.804902  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0920 19:59:05.804927  780633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-688277-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-688277-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-688277-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:59:05.953210  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
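
Provisioning above runs shell commands on the node over SSH through libmachine: the first dial fails with "connection reset by peer" while the restarted container's sshd comes up, then the hostname and /etc/hosts commands succeed. Below is a hedged Go sketch of that run-command-over-SSH step with a retry for the reset; the address, user, and key path echo the log but are assumptions here, and ignoring the host key is for illustration only.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one command over SSH, retrying the dial a few times
// because the freshly started container's sshd may not be accepting yet.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	config := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
		Timeout:         10 * time.Second,
	}
	var client *ssh.Client
	for i := 0; i < 5; i++ { // early dials may be reset while sshd starts
		if client, err = ssh.Dial("tcp", addr, config); err == nil {
			break
		}
		time.Sleep(time.Second)
	}
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("127.0.0.1:32838", "docker",
		"/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m04/id_rsa",
		"hostname")
	fmt.Println(out, err)
}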
	I0920 19:59:05.953284  780633 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-712952/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-712952/.minikube}
	I0920 19:59:05.953326  780633 ubuntu.go:177] setting up certificates
	I0920 19:59:05.953370  780633 provision.go:84] configureAuth start
	I0920 19:59:05.953475  780633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277-m04
	I0920 19:59:05.977286  780633 provision.go:143] copyHostCerts
	I0920 19:59:05.977332  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem
	I0920 19:59:05.977367  780633 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem, removing ...
	I0920 19:59:05.977380  780633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem
	I0920 19:59:05.977459  780633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/cert.pem (1123 bytes)
	I0920 19:59:05.977549  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem
	I0920 19:59:05.977575  780633 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem, removing ...
	I0920 19:59:05.977583  780633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem
	I0920 19:59:05.977612  780633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/key.pem (1675 bytes)
	I0920 19:59:05.977660  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem
	I0920 19:59:05.977682  780633 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem, removing ...
	I0920 19:59:05.977687  780633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem
	I0920 19:59:05.977715  780633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-712952/.minikube/ca.pem (1082 bytes)
	I0920 19:59:05.977769  780633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem org=jenkins.ha-688277-m04 san=[127.0.0.1 192.168.49.5 ha-688277-m04 localhost minikube]
	I0920 19:59:06.205728  780633 provision.go:177] copyRemoteCerts
	I0920 19:59:06.205846  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:59:06.205897  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:59:06.237464  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m04/id_rsa Username:docker}
	I0920 19:59:06.343342  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 19:59:06.343407  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 19:59:06.374260  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 19:59:06.374375  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 19:59:06.407928  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 19:59:06.408036  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 19:59:06.438100  780633 provision.go:87] duration metric: took 484.67952ms to configureAuth
	I0920 19:59:06.438246  780633 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:59:06.438537  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:59:06.438766  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:59:06.461875  780633 main.go:141] libmachine: Using SSH client type: native
	I0920 19:59:06.462124  780633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0920 19:59:06.462140  780633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 19:59:06.760229  780633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 19:59:06.760255  780633 machine.go:96] duration metric: took 4.332160298s to provisionDockerMachine
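	The tee above writes /etc/sysconfig/crio.minikube; the crio systemd unit in the kicbase image is assumed to source that file and append the variable to its ExecStart, which can be confirmed on the node:
	# expected lines are an assumption about the kicbase crio.service
	systemctl cat crio | grep -E 'EnvironmentFile|ExecStart'
	# EnvironmentFile=-/etc/sysconfig/crio.minikube
	# ExecStart=/usr/bin/crio $CRIO_OPTIONS $CRIO_MINIKUBE_OPTIONS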
	I0920 19:59:06.760267  780633 start.go:293] postStartSetup for "ha-688277-m04" (driver="docker")
	I0920 19:59:06.760283  780633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:59:06.760360  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:59:06.760412  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:59:06.782755  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m04/id_rsa Username:docker}
	I0920 19:59:06.887529  780633 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:59:06.891451  780633 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:59:06.891492  780633 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:59:06.891504  780633 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:59:06.891511  780633 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:59:06.891522  780633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/addons for local assets ...
	I0920 19:59:06.891586  780633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-712952/.minikube/files for local assets ...
	I0920 19:59:06.891679  780633 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem -> 7197342.pem in /etc/ssl/certs
	I0920 19:59:06.891690  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem -> /etc/ssl/certs/7197342.pem
	I0920 19:59:06.891793  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 19:59:06.901899  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem --> /etc/ssl/certs/7197342.pem (1708 bytes)
	I0920 19:59:06.929762  780633 start.go:296] duration metric: took 169.478717ms for postStartSetup
	I0920 19:59:06.929849  780633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:59:06.929898  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:59:06.947656  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m04/id_rsa Username:docker}
	I0920 19:59:07.058333  780633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:59:07.064035  780633 fix.go:56] duration metric: took 5.030699229s for fixHost
	I0920 19:59:07.064060  780633 start.go:83] releasing machines lock for "ha-688277-m04", held for 5.030754193s
	I0920 19:59:07.064182  780633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277-m04
	I0920 19:59:07.087770  780633 out.go:177] * Found network options:
	I0920 19:59:07.113134  780633 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0920 19:59:07.115745  780633 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 19:59:07.115785  780633 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 19:59:07.115817  780633 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 19:59:07.115831  780633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 19:59:07.115907  780633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 19:59:07.115949  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:59:07.115963  780633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:59:07.116024  780633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:59:07.149630  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m04/id_rsa Username:docker}
	I0920 19:59:07.158089  780633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m04/id_rsa Username:docker}
	I0920 19:59:07.404632  780633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:59:07.418473  780633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:59:07.429768  780633 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:59:07.429852  780633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:59:07.439994  780633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
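	CRI-O handles loopback itself, so stray *loopback.conf* files from other CNIs are renamed out of the way rather than deleted, keeping the change reversible. Restoring one is just the mv in reverse (the filename here is a typical example, not taken from this run):
	# list configs that were sidelined
	ls /etc/cni/net.d/*.mk_disabled 2>/dev/null
	# restore one if needed
	# sudo mv /etc/cni/net.d/200-loopback.conf.mk_disabled /etc/cni/net.d/200-loopback.conf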
	I0920 19:59:07.440018  780633 start.go:495] detecting cgroup driver to use...
	I0920 19:59:07.440065  780633 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:59:07.440144  780633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 19:59:07.456395  780633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 19:59:07.469792  780633 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:59:07.469875  780633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:59:07.488363  780633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:59:07.503887  780633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:59:07.608891  780633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:59:07.721986  780633 docker.go:233] disabling docker service ...
	I0920 19:59:07.722063  780633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:59:07.737746  780633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:59:07.751789  780633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:59:07.852443  780633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:59:07.945040  780633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:59:07.959278  780633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:59:07.981900  780633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 19:59:07.981965  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:59:07.998914  780633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 19:59:07.998986  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:59:08.041176  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:59:08.059777  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:59:08.073649  780633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:59:08.092432  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:59:08.106192  780633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:59:08.122345  780633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 19:59:08.135457  780633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:59:08.146733  780633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:59:08.158540  780633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:59:08.299490  780633 ssh_runner.go:195] Run: sudo systemctl restart crio
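	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment before the restart (a reconstruction of the edited keys only, not a verbatim dump):
	# keys as rewritten by the sed commands logged above
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]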
	I0920 19:59:08.503819  780633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 19:59:08.503897  780633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 19:59:08.511567  780633 start.go:563] Will wait 60s for crictl version
	I0920 19:59:08.511640  780633 ssh_runner.go:195] Run: which crictl
	I0920 19:59:08.517904  780633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:59:08.580987  780633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 19:59:08.581121  780633 ssh_runner.go:195] Run: crio --version
	I0920 19:59:08.642950  780633 ssh_runner.go:195] Run: crio --version
	I0920 19:59:08.699878  780633 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 19:59:08.702543  780633 out.go:177]   - env NO_PROXY=192.168.49.2
	I0920 19:59:08.705333  780633 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0920 19:59:08.708173  780633 cli_runner.go:164] Run: docker network inspect ha-688277 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:59:08.734502  780633 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 19:59:08.739393  780633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
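	The hosts edit uses a filter-then-append rewrite so the entry stays idempotent across restarts; the same pattern appears again below for control-plane.minikube.internal. Generalized (NAME/ADDR are placeholders):
	NAME=host.minikube.internal; ADDR=192.168.49.1
	# drop any existing entry, then append a fresh one and copy back atomically enough
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$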
	I0920 19:59:08.753847  780633 mustload.go:65] Loading cluster: ha-688277
	I0920 19:59:08.754105  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:59:08.754389  780633 cli_runner.go:164] Run: docker container inspect ha-688277 --format={{.State.Status}}
	I0920 19:59:08.786182  780633 host.go:66] Checking if "ha-688277" exists ...
	I0920 19:59:08.786501  780633 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277 for IP: 192.168.49.5
	I0920 19:59:08.786518  780633 certs.go:194] generating shared ca certs ...
	I0920 19:59:08.786533  780633 certs.go:226] acquiring lock for ca certs: {Name:mk7d5a5d7b3ae5cfc59d92978e91627e15e3360b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:59:08.786668  780633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key
	I0920 19:59:08.786715  780633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key
	I0920 19:59:08.786731  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 19:59:08.786744  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 19:59:08.786762  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 19:59:08.786773  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 19:59:08.786832  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734.pem (1338 bytes)
	W0920 19:59:08.786869  780633 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734_empty.pem, impossibly tiny 0 bytes
	I0920 19:59:08.786880  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 19:59:08.786905  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/ca.pem (1082 bytes)
	I0920 19:59:08.786929  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:59:08.786955  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/key.pem (1675 bytes)
	I0920 19:59:08.787000  780633 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem (1708 bytes)
	I0920 19:59:08.787031  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem -> /usr/share/ca-certificates/7197342.pem
	I0920 19:59:08.787048  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:59:08.787058  780633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734.pem -> /usr/share/ca-certificates/719734.pem
	I0920 19:59:08.787075  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:59:08.816416  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:59:08.845838  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:59:08.877791  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:59:08.916009  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/ssl/certs/7197342.pem --> /usr/share/ca-certificates/7197342.pem (1708 bytes)
	I0920 19:59:08.956296  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:59:08.984595  780633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-712952/.minikube/certs/719734.pem --> /usr/share/ca-certificates/719734.pem (1338 bytes)
	I0920 19:59:09.018621  780633 ssh_runner.go:195] Run: openssl version
	I0920 19:59:09.024854  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7197342.pem && ln -fs /usr/share/ca-certificates/7197342.pem /etc/ssl/certs/7197342.pem"
	I0920 19:59:09.039453  780633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7197342.pem
	I0920 19:59:09.044367  780633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 19:45 /usr/share/ca-certificates/7197342.pem
	I0920 19:59:09.044437  780633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7197342.pem
	I0920 19:59:09.057513  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7197342.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 19:59:09.073133  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:59:09.085847  780633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:59:09.091454  780633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:59:09.091526  780633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:59:09.100229  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:59:09.114530  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/719734.pem && ln -fs /usr/share/ca-certificates/719734.pem /etc/ssl/certs/719734.pem"
	I0920 19:59:09.129159  780633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/719734.pem
	I0920 19:59:09.134555  780633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 19:45 /usr/share/ca-certificates/719734.pem
	I0920 19:59:09.134627  780633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/719734.pem
	I0920 19:59:09.143885  780633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/719734.pem /etc/ssl/certs/51391683.0"
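	The <hash>.0 symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are not arbitrary: OpenSSL locates CAs in /etc/ssl/certs by subject hash, and `openssl x509 -hash` prints exactly that value, so the link name is derived directly from the cert:
	# how each symlink name is computed
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"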
	I0920 19:59:09.155845  780633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:59:09.160597  780633 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 19:59:09.160643  780633 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I0920 19:59:09.160749  780633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-688277-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-688277 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:59:09.160834  780633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:59:09.172320  780633 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:59:09.172401  780633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0920 19:59:09.183092  780633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 19:59:09.205154  780633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
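	In the 10-kubeadm.conf drop-in shown earlier, the empty `ExecStart=` line is deliberate: systemd requires clearing an inherited ExecStart before a drop-in may replace it. The merged unit, as systemd will actually run it, can be inspected on the node:
	# base kubelet.service plus all drop-ins, merged
	systemctl cat kubelet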
	I0920 19:59:09.228894  780633 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0920 19:59:09.239676  780633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:59:09.253324  780633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:59:09.403309  780633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:59:09.428019  780633 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0920 19:59:09.428588  780633 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:59:09.431673  780633 out.go:177] * Verifying Kubernetes components...
	I0920 19:59:09.434249  780633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:59:09.564187  780633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:59:09.580600  780633 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:59:09.580888  780633 kapi.go:59] client config for ha-688277: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/client.crt", KeyFile:"/home/jenkins/minikube-integration/19678-712952/.minikube/profiles/ha-688277/client.key", CAFile:"/home/jenkins/minikube-integration/19678-712952/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a16ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 19:59:09.580945  780633 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
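	The override swaps the HA virtual IP (192.168.49.254) for a concrete control-plane endpoint while the VIP may still be stale after the restart. A quick sanity check of both is possible without client certs, since /version is readable unauthenticated under default RBAC:
	# -k skips CA verification; fine for a smoke test
	curl -sk https://192.168.49.254:8443/version || true   # HA VIP (may lag during restart)
	curl -sk https://192.168.49.2:8443/version             # direct apiserver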
	I0920 19:59:09.581171  780633 node_ready.go:35] waiting up to 6m0s for node "ha-688277-m04" to be "Ready" ...
	I0920 19:59:09.581251  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:09.581257  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:09.581265  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:09.581268  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:09.583992  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:10.084348  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:10.084375  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:10.084386  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:10.084390  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:10.102154  780633 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0920 19:59:10.581356  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:10.581379  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:10.581389  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:10.581396  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:10.585825  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:11.081421  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:11.081448  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:11.081462  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:11.081467  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:11.104451  780633 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0920 19:59:11.581952  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:11.581978  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:11.581989  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:11.581993  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:11.585207  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:11.585940  780633 node_ready.go:53] node "ha-688277-m04" has status "Ready":"Unknown"
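	The loop above re-issues the node GET roughly every 500ms and inspects .status.conditions; driven from the test host, the same wait is a one-liner:
	# block until the node reports Ready (mirrors the 6m budget above)
	kubectl --context ha-688277 wait node/ha-688277-m04 --for=condition=Ready --timeout=6m
	# or read the condition directly
	kubectl --context ha-688277 get node ha-688277-m04 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'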
	I0920 19:59:12.082075  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:12.082105  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:12.082116  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:12.082120  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:12.090181  780633 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 19:59:12.581463  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:12.581483  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:12.581493  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:12.581499  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:12.593587  780633 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0920 19:59:13.082039  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:13.082064  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:13.082074  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:13.082078  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:13.085652  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:13.581738  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:13.581761  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:13.581771  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:13.581777  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:13.584768  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:14.081492  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:14.081516  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:14.081526  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:14.081532  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:14.084839  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:14.085658  780633 node_ready.go:53] node "ha-688277-m04" has status "Ready":"Unknown"
	I0920 19:59:14.581962  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:14.581990  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:14.582000  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:14.582004  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:14.585049  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:15.088649  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:15.088683  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:15.088740  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:15.088746  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:15.119064  780633 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0920 19:59:15.582206  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:15.582227  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:15.582237  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:15.582243  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:15.585685  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:16.081972  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:16.082001  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:16.082011  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:16.082019  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:16.089195  780633 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 19:59:16.090149  780633 node_ready.go:53] node "ha-688277-m04" has status "Ready":"Unknown"
	I0920 19:59:16.582041  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:16.582074  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:16.582085  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:16.582095  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:16.585509  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:16.586190  780633 node_ready.go:49] node "ha-688277-m04" has status "Ready":"True"
	I0920 19:59:16.586214  780633 node_ready.go:38] duration metric: took 7.005030783s for node "ha-688277-m04" to be "Ready" ...
	I0920 19:59:16.586226  780633 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:59:16.586307  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0920 19:59:16.586316  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:16.586325  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:16.586330  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:16.592029  780633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 19:59:16.601457  780633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:16.601678  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:16.601695  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:16.601705  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:16.601714  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:16.605267  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:16.606046  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:16.606099  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:16.606124  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:16.606130  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:16.609352  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
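	Each poll below pairs a GET of the coredns pod with a GET of its host node ha-688277, since the pod's readiness is evaluated alongside the node it runs on. The equivalent blocking wait from the test host:
	# wait on the specific coredns pod being polled in this loop
	kubectl --context ha-688277 -n kube-system wait pod/coredns-7c65d6cfc9-f5x4v \
	  --for=condition=Ready --timeout=6m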
	I0920 19:59:17.102285  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:17.102321  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:17.102331  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:17.102335  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:17.109328  780633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 19:59:17.110207  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:17.110278  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:17.110304  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:17.110335  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:17.113127  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:17.602747  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:17.602771  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:17.602782  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:17.602786  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:17.605959  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:17.606875  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:17.606894  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:17.606904  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:17.606909  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:17.610019  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:18.101793  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:18.101827  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:18.101841  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:18.101847  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:18.106364  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:18.107165  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:18.107191  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:18.107201  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:18.107209  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:18.111660  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:18.602386  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:18.602418  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:18.602430  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:18.602440  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:18.605773  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:18.606662  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:18.606685  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:18.606695  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:18.606699  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:18.609516  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:18.610105  780633 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:59:19.101788  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:19.101815  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:19.101825  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:19.101830  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:19.106298  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:19.107251  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:19.107276  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:19.107294  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:19.107299  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:19.110858  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:19.602663  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:19.602684  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:19.602693  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:19.602698  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:19.605977  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:19.606792  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:19.606808  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:19.606817  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:19.606822  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:19.609541  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:20.101826  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:20.101853  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:20.101863  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:20.101870  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:20.105192  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:20.106135  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:20.106157  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:20.106166  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:20.106170  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:20.109058  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:20.602285  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:20.602310  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:20.602320  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:20.602324  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:20.605605  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:20.606520  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:20.606546  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:20.606556  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:20.606562  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:20.609160  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:21.101799  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:21.101828  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:21.101837  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:21.101841  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:21.106023  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:21.106868  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:21.106886  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:21.106896  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:21.106900  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:21.109732  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:21.110244  780633 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:59:21.601936  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:21.601960  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:21.601970  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:21.601979  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:21.605340  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:21.606476  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:21.606532  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:21.606563  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:21.606568  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:21.610025  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:22.102025  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:22.102051  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:22.102061  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:22.102066  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:22.105319  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:22.106221  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:22.106269  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:22.106284  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:22.106317  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:22.109166  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:22.602000  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:22.602024  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:22.602035  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:22.602042  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:22.605889  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:22.606888  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:22.606913  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:22.606923  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:22.606927  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:22.609569  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:23.101736  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:23.101760  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:23.101769  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:23.101774  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:23.104671  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:23.105521  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:23.105538  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:23.105547  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:23.105552  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:23.108266  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:23.601684  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:23.601711  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:23.601721  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:23.601727  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:23.604805  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:23.605741  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:23.605762  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:23.605772  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:23.605777  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:23.608455  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:23.609268  780633 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:59:24.101746  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:24.101775  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:24.101786  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:24.101790  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:24.104813  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:24.105620  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:24.105637  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:24.105646  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:24.105655  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:24.109155  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:24.602713  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:24.602756  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:24.602766  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:24.602771  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:24.606069  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:24.607158  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:24.607181  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:24.607191  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:24.607195  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:24.610633  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:25.101665  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:25.101696  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:25.101706  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:25.101710  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:25.105180  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:25.106453  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:25.106476  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:25.106485  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:25.106491  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:25.109352  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:25.601651  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:25.601679  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:25.601690  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:25.601694  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:25.604848  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:25.605655  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:25.605683  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:25.605694  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:25.605698  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:25.608725  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:25.609743  780633 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:59:26.101950  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:26.101979  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:26.101989  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:26.101996  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:26.105933  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:26.107040  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:26.107069  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:26.107086  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:26.107090  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:26.109997  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:26.601958  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:26.601981  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:26.601991  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:26.601995  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:26.604837  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:26.605861  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:26.605880  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:26.605890  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:26.605895  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:26.608506  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:27.101843  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:27.101872  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:27.101882  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:27.101887  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:27.105087  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:27.105901  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:27.105930  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:27.105941  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:27.105947  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:27.108743  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:27.602273  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:27.602300  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:27.602311  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:27.602315  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:27.605200  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:27.606052  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:27.606072  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:27.606082  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:27.606086  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:27.608739  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:28.101997  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:28.102021  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:28.102030  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:28.102034  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:28.105079  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:28.106123  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:28.106143  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:28.106153  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:28.106158  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:28.108970  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:28.109658  780633 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:59:28.602475  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:28.602499  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:28.602509  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:28.602513  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:28.606070  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:28.606975  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:28.606999  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:28.607008  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:28.607015  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:28.610049  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:29.102667  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:29.102692  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:29.102703  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:29.102708  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:29.105786  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:29.107016  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:29.107038  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:29.107048  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:29.107055  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:29.109902  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:29.602137  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:29.602159  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:29.602169  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:29.602174  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:29.605489  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:29.606407  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:29.606429  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:29.606439  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:29.606443  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:29.609208  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:30.103603  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:30.103691  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:30.103717  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:30.103737  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:30.107302  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:30.108452  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:30.108479  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:30.108491  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:30.108495  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:30.112039  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:30.112773  780633 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:59:30.602519  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:30.602548  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:30.602557  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:30.602564  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:30.605651  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:30.606574  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:30.606602  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:30.606613  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:30.606618  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:30.609952  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:31.102503  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:31.102526  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:31.102537  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:31.102542  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:31.106099  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:31.107186  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:31.107211  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:31.107222  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:31.107228  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:31.110841  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:31.602674  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:31.602700  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:31.602709  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:31.602714  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:31.606338  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:31.607304  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:31.607328  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:31.607337  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:31.607342  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:31.611016  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:32.101757  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:32.101781  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:32.101790  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:32.101794  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:32.104780  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:32.105577  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:32.105599  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:32.105609  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:32.105616  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:32.108290  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:32.602577  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:32.602603  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:32.602614  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:32.602619  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:32.606869  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:32.607628  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:32.607649  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:32.607658  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:32.607665  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:32.610863  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:32.611748  780633 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:59:33.102699  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:33.102722  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:33.102732  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:33.102736  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:33.105923  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:33.107099  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:33.107121  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:33.107131  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:33.107137  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:33.110135  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:33.602562  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:33.602587  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:33.602596  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:33.602601  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:33.605624  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:33.606649  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:33.606676  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:33.606687  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:33.606693  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:33.609950  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:34.102168  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:34.102194  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:34.102203  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:34.102209  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:34.105475  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:34.106511  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:34.106542  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:34.106550  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:34.106555  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:34.109493  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:34.601994  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:34.602021  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:34.602033  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:34.602037  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:34.605496  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:34.606510  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:34.606531  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:34.606541  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:34.606548  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:34.609474  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:35.102139  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:35.102166  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:35.102175  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:35.102179  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:35.106266  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:35.107513  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:35.107539  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:35.107548  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:35.107553  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:35.110637  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:35.111714  780633 pod_ready.go:103] pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace has status "Ready":"False"
	I0920 19:59:35.602588  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:35.602619  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:35.602631  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:35.602637  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:35.606134  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:35.606934  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:35.606954  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:35.606964  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:35.606968  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:35.609871  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:36.101981  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-f5x4v
	I0920 19:59:36.102015  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.102026  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.102033  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.105358  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.106640  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:36.106667  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.106677  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.106682  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.110214  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.110862  780633 pod_ready.go:98] node "ha-688277" hosting pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:36.110891  780633 pod_ready.go:82] duration metric: took 19.509384714s for pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace to be "Ready" ...
	E0920 19:59:36.110901  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277" hosting pod "coredns-7c65d6cfc9-f5x4v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
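For context on the wait loop above: pod_ready polls the pod and its hosting node in pairs, and abandons the wait (the "skipping!" case) when the node itself is not Ready. A minimal client-go sketch of that check follows; the function name and polling cadence are illustrative, not minikube's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True. If the node
// hosting the pod is not Ready, the wait is abandoned with an error,
// mirroring the "(skipping!)" lines in the trace above.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, fmt.Errorf("node %q hosting pod %q is not Ready (%s)", node.Name, pod.Name, c.Status)
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for { // poll roughly every 500ms, as the timestamps above show
		ok, err := podReady(ctx, cs, "kube-system", "coredns-7c65d6cfc9-f5x4v")
		if err != nil || ok {
			fmt.Println(ok, err)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}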
	I0920 19:59:36.110915  780633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-srdh5" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:36.110986  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-srdh5
	I0920 19:59:36.110997  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.111006  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.111011  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.115007  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.116309  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:36.116339  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.116350  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.116355  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.121914  780633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 19:59:36.123111  780633 pod_ready.go:98] node "ha-688277" hosting pod "coredns-7c65d6cfc9-srdh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:36.123148  780633 pod_ready.go:82] duration metric: took 12.223858ms for pod "coredns-7c65d6cfc9-srdh5" in "kube-system" namespace to be "Ready" ...
	E0920 19:59:36.123161  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277" hosting pod "coredns-7c65d6cfc9-srdh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:36.123169  780633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:36.123249  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-688277
	I0920 19:59:36.123260  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.123276  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.123283  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.130582  780633 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 19:59:36.131960  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:36.131983  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.131992  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.132001  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.135157  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.136490  780633 pod_ready.go:98] node "ha-688277" hosting pod "etcd-ha-688277" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:36.136520  780633 pod_ready.go:82] duration metric: took 13.343932ms for pod "etcd-ha-688277" in "kube-system" namespace to be "Ready" ...
	E0920 19:59:36.136544  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277" hosting pod "etcd-ha-688277" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:36.136553  780633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:36.136639  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-688277-m02
	I0920 19:59:36.136646  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.136656  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.136666  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.140716  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:36.141927  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:59:36.141950  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.141960  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.141963  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.145188  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.146377  780633 pod_ready.go:93] pod "etcd-ha-688277-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 19:59:36.146413  780633 pod_ready.go:82] duration metric: took 9.849066ms for pod "etcd-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:36.146437  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:36.146520  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-688277
	I0920 19:59:36.146537  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.146546  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.146550  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.149960  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.151441  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:36.151472  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.151482  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.151486  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.154628  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.155704  780633 pod_ready.go:98] node "ha-688277" hosting pod "kube-apiserver-ha-688277" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:36.155734  780633 pod_ready.go:82] duration metric: took 9.288775ms for pod "kube-apiserver-ha-688277" in "kube-system" namespace to be "Ready" ...
	E0920 19:59:36.155745  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277" hosting pod "kube-apiserver-ha-688277" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:36.155753  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:36.303070  780633 request.go:632] Waited for 147.210025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-688277-m02
	I0920 19:59:36.303151  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-688277-m02
	I0920 19:59:36.303163  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.303172  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.303180  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.306233  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.502408  780633 request.go:632] Waited for 195.194042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:59:36.502482  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:59:36.502489  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.502498  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.502508  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.505616  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.506260  780633 pod_ready.go:93] pod "kube-apiserver-ha-688277-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 19:59:36.506285  780633 pod_ready.go:82] duration metric: took 350.520798ms for pod "kube-apiserver-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:36.506299  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:36.702912  780633 request.go:632] Waited for 196.516639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277
	I0920 19:59:36.702981  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277
	I0920 19:59:36.702995  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.703006  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.703018  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.706256  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.902488  780633 request.go:632] Waited for 195.382814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:36.902598  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:36.902612  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:36.902625  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:36.902633  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:36.906252  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:36.907089  780633 pod_ready.go:98] node "ha-688277" hosting pod "kube-controller-manager-ha-688277" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:36.907163  780633 pod_ready.go:82] duration metric: took 400.85404ms for pod "kube-controller-manager-ha-688277" in "kube-system" namespace to be "Ready" ...
	E0920 19:59:36.907197  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277" hosting pod "kube-controller-manager-ha-688277" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
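The "Waited for ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's token-bucket rate limiter, whose defaults are QPS=5 and Burst=10; once the wait loop issues paired pod/node GETs twice a second plus extras, the bucket empties and requests queue. A short sketch of how a client raises those limits (the values 50/100 are arbitrary examples):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained requests per second (default 5)
	cfg.Burst = 100 // short-term burst allowance (default 10)
	// Equivalently, install the token-bucket limiter explicitly:
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
	_ = kubernetes.NewForConfigOrDie(cfg)
}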
	I0920 19:59:36.907218  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:37.102847  780633 request.go:632] Waited for 195.533834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277-m02
	I0920 19:59:37.102989  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-688277-m02
	I0920 19:59:37.103001  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:37.103011  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:37.103016  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:37.106869  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:37.302886  780633 request.go:632] Waited for 195.140923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:59:37.302949  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:59:37.302958  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:37.302967  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:37.302977  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:37.306188  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:37.306866  780633 pod_ready.go:93] pod "kube-controller-manager-ha-688277-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 19:59:37.306889  780633 pod_ready.go:82] duration metric: took 399.647149ms for pod "kube-controller-manager-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:37.306903  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-596wf" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:37.502243  780633 request.go:632] Waited for 195.152353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-596wf
	I0920 19:59:37.502325  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-596wf
	I0920 19:59:37.502332  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:37.502340  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:37.502346  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:37.508142  780633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 19:59:37.702681  780633 request.go:632] Waited for 193.67236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:37.702796  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m04
	I0920 19:59:37.702815  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:37.702936  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:37.702948  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:37.706493  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:37.707286  780633 pod_ready.go:93] pod "kube-proxy-596wf" in "kube-system" namespace has status "Ready":"True"
	I0920 19:59:37.707328  780633 pod_ready.go:82] duration metric: took 400.416921ms for pod "kube-proxy-596wf" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:37.707341  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-czqf2" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:37.902794  780633 request.go:632] Waited for 195.312981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-czqf2
	I0920 19:59:37.902856  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-czqf2
	I0920 19:59:37.902862  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:37.902871  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:37.902879  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:37.905958  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:38.102311  780633 request.go:632] Waited for 195.327438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:59:38.102372  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:59:38.102378  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:38.102393  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:38.102399  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:38.106090  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:38.106819  780633 pod_ready.go:93] pod "kube-proxy-czqf2" in "kube-system" namespace has status "Ready":"True"
	I0920 19:59:38.106870  780633 pod_ready.go:82] duration metric: took 399.485306ms for pod "kube-proxy-czqf2" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:38.106890  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l769r" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:38.302716  780633 request.go:632] Waited for 195.72354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l769r
	I0920 19:59:38.302790  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l769r
	I0920 19:59:38.302799  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:38.302808  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:38.302813  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:38.305885  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:38.502957  780633 request.go:632] Waited for 196.241469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:38.503021  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:38.503028  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:38.503037  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:38.503046  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:38.506068  780633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 19:59:38.506705  780633 pod_ready.go:98] node "ha-688277" hosting pod "kube-proxy-l769r" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:38.506733  780633 pod_ready.go:82] duration metric: took 399.823884ms for pod "kube-proxy-l769r" in "kube-system" namespace to be "Ready" ...
	E0920 19:59:38.506744  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277" hosting pod "kube-proxy-l769r" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:38.506752  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-688277" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:38.702219  780633 request.go:632] Waited for 195.397812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277
	I0920 19:59:38.702347  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277
	I0920 19:59:38.702359  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:38.702368  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:38.702375  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:38.726341  780633 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0920 19:59:38.902500  780633 request.go:632] Waited for 175.313431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:38.902578  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277
	I0920 19:59:38.902589  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:38.902639  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:38.902650  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:38.908966  780633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 19:59:38.910245  780633 pod_ready.go:98] node "ha-688277" hosting pod "kube-scheduler-ha-688277" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:38.910280  780633 pod_ready.go:82] duration metric: took 403.520583ms for pod "kube-scheduler-ha-688277" in "kube-system" namespace to be "Ready" ...
	E0920 19:59:38.910303  780633 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-688277" hosting pod "kube-scheduler-ha-688277" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-688277" has status "Ready":"Unknown"
	I0920 19:59:38.910311  780633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:39.102972  780633 request.go:632] Waited for 192.58202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277-m02
	I0920 19:59:39.103141  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-688277-m02
	I0920 19:59:39.103152  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:39.103168  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:39.103174  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:39.107701  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:39.302828  780633 request.go:632] Waited for 194.358244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:59:39.302905  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-688277-m02
	I0920 19:59:39.302916  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:39.302925  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:39.302934  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:39.306016  780633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 19:59:39.306853  780633 pod_ready.go:93] pod "kube-scheduler-ha-688277-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 19:59:39.306880  780633 pod_ready.go:82] duration metric: took 396.55508ms for pod "kube-scheduler-ha-688277-m02" in "kube-system" namespace to be "Ready" ...
	I0920 19:59:39.306895  780633 pod_ready.go:39] duration metric: took 22.72065047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:59:39.306917  780633 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:59:39.306987  780633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:59:39.319385  780633 system_svc.go:56] duration metric: took 12.44755ms WaitForService to wait for kubelet
	I0920 19:59:39.319438  780633 kubeadm.go:582] duration metric: took 29.891371448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
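The kubelet liveness check above shells out to systemd: `systemctl is-active --quiet <unit>` prints nothing and answers purely through its exit code (0 means active). A standalone sketch of the same check, run directly on the node rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 == unit is active; any non-zero code surfaces as a
	// non-nil error from Run.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}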
	I0920 19:59:39.319460  780633 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:59:39.502858  780633 request.go:632] Waited for 183.319115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0920 19:59:39.502957  780633 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0920 19:59:39.502975  780633 round_trippers.go:469] Request Headers:
	I0920 19:59:39.502985  780633 round_trippers.go:473]     Accept: application/json, */*
	I0920 19:59:39.502990  780633 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0920 19:59:39.507135  780633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 19:59:39.508966  780633 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:59:39.509012  780633 node_conditions.go:123] node cpu capacity is 2
	I0920 19:59:39.509033  780633 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:59:39.509094  780633 node_conditions.go:123] node cpu capacity is 2
	I0920 19:59:39.509113  780633 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:59:39.509125  780633 node_conditions.go:123] node cpu capacity is 2
	I0920 19:59:39.509131  780633 node_conditions.go:105] duration metric: took 189.664578ms to run NodePressure ...
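The NodePressure verification above reads each node's capacity out of its status; the repeated 203034800Ki / 2-CPU pairs are those fields for the three surviving nodes. A minimal sketch that lists nodes and prints the same two values:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Status.Capacity is a ResourceList keyed by resource name.
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}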
	I0920 19:59:39.509147  780633 start.go:241] waiting for startup goroutines ...
	I0920 19:59:39.509203  780633 start.go:255] writing updated cluster config ...
	I0920 19:59:39.509754  780633 ssh_runner.go:195] Run: rm -f paused
	I0920 19:59:39.593397  780633 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:59:39.598134  780633 out.go:177] * Done! kubectl is now configured to use "ha-688277" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 19:58:55 ha-688277 crio[644]: time="2024-09-20 19:58:55.304445960Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/35c355350cfde8d20904c05ad0f26c4dae4e76c411a10731525360f9adbf8f47/merged/etc/group: no such file or directory"
	Sep 20 19:58:55 ha-688277 crio[644]: time="2024-09-20 19:58:55.346768449Z" level=info msg="Created container ec3382c770f69807c2ff808bf839b33a4845ed771a289322de27c8003a864adb: kube-system/storage-provisioner/storage-provisioner" id=4e07c535-b120-45f4-a834-5091ab2aa0de name=/runtime.v1.RuntimeService/CreateContainer
	Sep 20 19:58:55 ha-688277 crio[644]: time="2024-09-20 19:58:55.347301680Z" level=info msg="Starting container: ec3382c770f69807c2ff808bf839b33a4845ed771a289322de27c8003a864adb" id=ea0ff3e3-49cf-4d1c-a81f-62c738f8926d name=/runtime.v1.RuntimeService/StartContainer
	Sep 20 19:58:55 ha-688277 crio[644]: time="2024-09-20 19:58:55.353707179Z" level=info msg="Started container" PID=1843 containerID=ec3382c770f69807c2ff808bf839b33a4845ed771a289322de27c8003a864adb description=kube-system/storage-provisioner/storage-provisioner id=ea0ff3e3-49cf-4d1c-a81f-62c738f8926d name=/runtime.v1.RuntimeService/StartContainer sandboxID=461e63ff8f54b280d14f6dc5cd96c9d176737dafc60267441c4331f5cc75e54a
	Sep 20 19:59:04 ha-688277 crio[644]: time="2024-09-20 19:59:04.993744827Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 20 19:59:04 ha-688277 crio[644]: time="2024-09-20 19:59:04.999158970Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 20 19:59:04 ha-688277 crio[644]: time="2024-09-20 19:59:04.999195342Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 20 19:59:04 ha-688277 crio[644]: time="2024-09-20 19:59:04.999213467Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 20 19:59:05 ha-688277 crio[644]: time="2024-09-20 19:59:05.010796068Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 20 19:59:05 ha-688277 crio[644]: time="2024-09-20 19:59:05.010846085Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 20 19:59:05 ha-688277 crio[644]: time="2024-09-20 19:59:05.010865350Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 20 19:59:05 ha-688277 crio[644]: time="2024-09-20 19:59:05.027995876Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 20 19:59:05 ha-688277 crio[644]: time="2024-09-20 19:59:05.028035276Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 20 19:59:05 ha-688277 crio[644]: time="2024-09-20 19:59:05.028053844Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 20 19:59:05 ha-688277 crio[644]: time="2024-09-20 19:59:05.033673693Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 20 19:59:05 ha-688277 crio[644]: time="2024-09-20 19:59:05.033720970Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 20 19:59:07 ha-688277 crio[644]: time="2024-09-20 19:59:07.991580809Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=8bf1f6fb-79a1-4951-94a1-6f1e0ba4e28b name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:59:07 ha-688277 crio[644]: time="2024-09-20 19:59:07.991809153Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=8bf1f6fb-79a1-4951-94a1-6f1e0ba4e28b name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:59:07 ha-688277 crio[644]: time="2024-09-20 19:59:07.993885408Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=de7821f7-2188-4cdc-aaa1-4d51e7edfcbf name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:59:07 ha-688277 crio[644]: time="2024-09-20 19:59:07.994133919Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=de7821f7-2188-4cdc-aaa1-4d51e7edfcbf name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:59:07 ha-688277 crio[644]: time="2024-09-20 19:59:07.995329093Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-688277/kube-controller-manager" id=30950d6d-d9c7-41be-a736-9d61496f25f8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 20 19:59:07 ha-688277 crio[644]: time="2024-09-20 19:59:07.995437029Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 20 19:59:08 ha-688277 crio[644]: time="2024-09-20 19:59:08.120412497Z" level=info msg="Created container cc15d39963bb600216966876c081d379ddf6abb4f539cc139269cade08da4e5e: kube-system/kube-controller-manager-ha-688277/kube-controller-manager" id=30950d6d-d9c7-41be-a736-9d61496f25f8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 20 19:59:08 ha-688277 crio[644]: time="2024-09-20 19:59:08.121264884Z" level=info msg="Starting container: cc15d39963bb600216966876c081d379ddf6abb4f539cc139269cade08da4e5e" id=caa955e6-b035-46ff-8fc2-a33424f40468 name=/runtime.v1.RuntimeService/StartContainer
	Sep 20 19:59:08 ha-688277 crio[644]: time="2024-09-20 19:59:08.148464082Z" level=info msg="Started container" PID=1924 containerID=cc15d39963bb600216966876c081d379ddf6abb4f539cc139269cade08da4e5e description=kube-system/kube-controller-manager-ha-688277/kube-controller-manager id=caa955e6-b035-46ff-8fc2-a33424f40468 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fd5d0934180aacf895f7197990b4f7c8452fdf1ad2658e311a05d9760a01110
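The "CNI monitoring event" lines above show CRI-O watching /etc/cni/net.d with inotify and re-reading the default network each time a conflist changes; the .temp CREATE/WRITE followed by RENAME is kindnet writing its config atomically. A rough equivalent using the fsnotify package, purely as an illustration of the mechanism rather than CRI-O's actual code:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// CREATE, WRITE, and RENAME on a conflist would each trigger
			// a config reload, matching the CRI-O log lines above.
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}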
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cc15d39963bb6       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   34 seconds ago       Running             kube-controller-manager   8                   5fd5d0934180a       kube-controller-manager-ha-688277
	ec3382c770f69       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   46 seconds ago       Running             storage-provisioner       5                   461e63ff8f54b       storage-provisioner
	25c817b4be049       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   47 seconds ago       Running             kube-vip                  3                   b7d5efecf04c7       kube-vip-ha-688277
	22e325005b87e       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   52 seconds ago       Running             kube-apiserver            4                   d4e95fde84607       kube-apiserver-ha-688277
	51da11d827292       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   ae715a82c79ac       coredns-7c65d6cfc9-f5x4v
	90bc17b5e1a27       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   0489f5fba5509       coredns-7c65d6cfc9-srdh5
	6ff972542ca0a       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   About a minute ago   Running             kube-proxy                2                   b18e5f752344a       kube-proxy-l769r
	c2535931bcfdf       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       4                   461e63ff8f54b       storage-provisioner
	42583c31125fe       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   c8060cd683f7c       busybox-7dff88458-b4p5n
	b87463fa2dfe7       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   f8036424af0ef       kindnet-h85n4
	b400a71d1bcab       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   7                   5fd5d0934180a       kube-controller-manager-ha-688277
	cef86b6ed3a0e       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   2 minutes ago        Exited              kube-apiserver            3                   d4e95fde84607       kube-apiserver-ha-688277
	27f0fc41a6387       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   2 minutes ago        Running             kube-scheduler            2                   a709af7309be4       kube-scheduler-ha-688277
	4d61684d5d593       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   2 minutes ago        Exited              kube-vip                  2                   b7d5efecf04c7       kube-vip-ha-688277
	f984ab13f35de       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   2 minutes ago        Running             etcd                      2                   b454be8d5be0b       etcd-ha-688277
	
	
	==> coredns [51da11d82729209cae9efb30aa7d21b39be24e7ad30aa53f9e367d6df9ed36a1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34721 - 54670 "HINFO IN 6436862764793469759.4959348658312853637. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01633481s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1878022782]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 19:58:25.091) (total time: 30001ms):
	Trace[1878022782]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:58:55.091)
	Trace[1878022782]: [30.001333987s] [30.001333987s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[782346128]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 19:58:25.087) (total time: 30011ms):
	Trace[782346128]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:58:55.092)
	Trace[782346128]: [30.011043184s] [30.011043184s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1182084550]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 19:58:25.090) (total time: 30008ms):
	Trace[1182084550]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:58:55.092)
	Trace[1182084550]: [30.008069521s] [30.008069521s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [90bc17b5e1a278355b18f549e244aa9d7f902fc75f55b9561f1467c546a10785] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39410 - 64459 "HINFO IN 3820835834272790750.9076683611083740241. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020457445s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[667868243]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 19:58:25.138) (total time: 30003ms):
	Trace[667868243]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:58:55.142)
	Trace[667868243]: [30.003460094s] [30.003460094s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1321042859]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 19:58:25.148) (total time: 30001ms):
	Trace[1321042859]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:58:55.149)
	Trace[1321042859]: [30.001458052s] [30.001458052s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[279925681]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 19:58:25.138) (total time: 30015ms):
	Trace[279925681]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30015ms (19:58:55.154)
	Trace[279925681]: [30.015887574s] [30.015887574s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
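
Both CoreDNS replicas fail the same way: every reflector list against the in-cluster Service VIP (https://10.96.0.1:443) times out for roughly 30s after 19:58:25, i.e. DNS came up before any apiserver was reachable through the VIP. A hedged sketch for probing both paths after the fact (not part of the recorded run; curl being present in the kicbase node image is an assumption):

	# Is any apiserver answering on the control-plane endpoint this kubeconfig uses?
	kubectl --context ha-688277 get --raw /readyz
	# Is the in-cluster VIP programmed on the node by kube-proxy?
	# (assumption: curl ships in the node image)
	out/minikube-linux-arm64 -p ha-688277 ssh -- curl -ks https://10.96.0.1/healthz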
	
	
	==> describe nodes <==
	Name:               ha-688277
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-688277
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-688277
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_49_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:48:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-688277
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:59:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 19:58:12 +0000   Fri, 20 Sep 2024 19:59:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 19:58:12 +0000   Fri, 20 Sep 2024 19:59:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 19:58:12 +0000   Fri, 20 Sep 2024 19:59:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 19:58:12 +0000   Fri, 20 Sep 2024 19:59:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-688277
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c08429dbcdcc49c0b946f3bba4de1560
	  System UUID:                d2e21379-863c-4863-aa72-14e6fff49fa2
	  Boot ID:                    7d682649-b07c-44b5-a0a6-3c50df538ea4
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b4p5n              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 coredns-7c65d6cfc9-f5x4v             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7c65d6cfc9-srdh5             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-688277                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-h85n4                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-688277             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-688277    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-l769r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-688277             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-688277                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 76s                    kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 4m58s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-688277 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-688277 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-688277 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-688277 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-688277 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-688277 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-688277 event: Registered Node ha-688277 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-688277 event: Registered Node ha-688277 in Controller
	  Normal   NodeReady                9m57s                  kubelet          Node ha-688277 status is now: NodeReady
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-688277 event: Registered Node ha-688277 in Controller
	  Normal   Starting                 5m45s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m45s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    5m44s (x8 over 5m44s)  kubelet          Node ha-688277 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  5m44s (x8 over 5m44s)  kubelet          Node ha-688277 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m44s (x7 over 5m44s)  kubelet          Node ha-688277 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m7s                   node-controller  Node ha-688277 event: Registered Node ha-688277 in Controller
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-688277 event: Registered Node ha-688277 in Controller
	  Normal   RegisteredNode           3m40s                  node-controller  Node ha-688277 event: Registered Node ha-688277 in Controller
	  Normal   Starting                 2m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m8s)    kubelet          Node ha-688277 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m8s)    kubelet          Node ha-688277 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x7 over 2m8s)    kubelet          Node ha-688277 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           87s                    node-controller  Node ha-688277 event: Registered Node ha-688277 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-688277 event: Registered Node ha-688277 in Controller
	  Normal   NodeNotReady             7s                     node-controller  Node ha-688277 status is now: NodeNotReady
	
	
	Name:               ha-688277-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-688277-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-688277
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T19_49_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:49:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-688277-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:59:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:58:15 +0000   Fri, 20 Sep 2024 19:49:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:58:15 +0000   Fri, 20 Sep 2024 19:49:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:58:15 +0000   Fri, 20 Sep 2024 19:49:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:58:15 +0000   Fri, 20 Sep 2024 19:50:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-688277-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ab4565cc2c4843d0b22ee341aefe2a3a
	  System UUID:                78dd2024-ea8b-449a-8ff3-99c2c18d57c2
	  Boot ID:                    7d682649-b07c-44b5-a0a6-3c50df538ea4
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rx7lk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 etcd-ha-688277-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-d4b7m                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-688277-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-688277-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-czqf2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-688277-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-688277-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m18s                  kube-proxy       
	  Normal   Starting                 4m53s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 62s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-688277-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-688277-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-688277-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-688277-m02 event: Registered Node ha-688277-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-688277-m02 event: Registered Node ha-688277-m02 in Controller
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-688277-m02 event: Registered Node ha-688277-m02 in Controller
	  Normal   NodeHasSufficientPID     6m51s (x7 over 6m51s)  kubelet          Node ha-688277-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m51s (x8 over 6m51s)  kubelet          Node ha-688277-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m51s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m51s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m51s (x8 over 6m51s)  kubelet          Node ha-688277-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m42s (x8 over 5m42s)  kubelet          Node ha-688277-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m42s (x7 over 5m42s)  kubelet          Node ha-688277-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m42s (x8 over 5m42s)  kubelet          Node ha-688277-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 5m42s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m42s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           5m7s                   node-controller  Node ha-688277-m02 event: Registered Node ha-688277-m02 in Controller
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-688277-m02 event: Registered Node ha-688277-m02 in Controller
	  Normal   RegisteredNode           3m40s                  node-controller  Node ha-688277-m02 event: Registered Node ha-688277-m02 in Controller
	  Normal   Starting                 2m5s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m5s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m5s (x8 over 2m5s)    kubelet          Node ha-688277-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s (x8 over 2m5s)    kubelet          Node ha-688277-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s (x7 over 2m5s)    kubelet          Node ha-688277-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           87s                    node-controller  Node ha-688277-m02 event: Registered Node ha-688277-m02 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-688277-m02 event: Registered Node ha-688277-m02 in Controller
	
	
	Name:               ha-688277-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-688277-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=ha-688277
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T19_51_55_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:51:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-688277-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:59:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:59:16 +0000   Fri, 20 Sep 2024 19:59:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:59:16 +0000   Fri, 20 Sep 2024 19:59:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:59:16 +0000   Fri, 20 Sep 2024 19:59:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:59:16 +0000   Fri, 20 Sep 2024 19:59:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-688277-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0820c0c0c9ff43e1aa4232f9ba1072d2
	  System UUID:                32ee773d-1b69-42ab-908f-545154c0afcd
	  Boot ID:                    7d682649-b07c-44b5-a0a6-3c50df538ea4
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-42vgs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kindnet-6xnsl              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m48s
	  kube-system                 kube-proxy-596wf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m45s                  kube-proxy       
	  Normal   Starting                 7s                     kube-proxy       
	  Normal   Starting                 3m9s                   kube-proxy       
	  Normal   NodeHasSufficientPID     7m48s (x2 over 7m48s)  kubelet          Node ha-688277-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m48s (x2 over 7m48s)  kubelet          Node ha-688277-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m48s (x2 over 7m48s)  kubelet          Node ha-688277-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-688277-m04 event: Registered Node ha-688277-m04 in Controller
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-688277-m04 event: Registered Node ha-688277-m04 in Controller
	  Normal   RegisteredNode           7m44s                  node-controller  Node ha-688277-m04 event: Registered Node ha-688277-m04 in Controller
	  Normal   NodeReady                7m35s                  kubelet          Node ha-688277-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m7s                   node-controller  Node ha-688277-m04 event: Registered Node ha-688277-m04 in Controller
	  Normal   NodeNotReady             4m27s                  node-controller  Node ha-688277-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-688277-m04 event: Registered Node ha-688277-m04 in Controller
	  Normal   RegisteredNode           3m40s                  node-controller  Node ha-688277-m04 event: Registered Node ha-688277-m04 in Controller
	  Normal   Starting                 3m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m26s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m20s (x7 over 3m26s)  kubelet          Node ha-688277-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m14s (x8 over 3m26s)  kubelet          Node ha-688277-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m14s (x8 over 3m26s)  kubelet          Node ha-688277-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           87s                    node-controller  Node ha-688277-m04 event: Registered Node ha-688277-m04 in Controller
	  Normal   NodeNotReady             47s                    node-controller  Node ha-688277-m04 status is now: NodeNotReady
	  Normal   Starting                 39s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 39s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     33s (x7 over 39s)      kubelet          Node ha-688277-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                    node-controller  Node ha-688277-m04 event: Registered Node ha-688277-m04 in Controller
	  Normal   NodeHasSufficientMemory  26s (x8 over 39s)      kubelet          Node ha-688277-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26s (x8 over 39s)      kubelet          Node ha-688277-m04 status is now: NodeHasNoDiskPressure
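
Across the three node descriptions, only ha-688277 reads Unknown on every condition ("Kubelet stopped posting node status"), which is what drew its node.kubernetes.io/unreachable NoExecute/NoSchedule taints seven seconds before this snapshot; ha-688277-m02 and ha-688277-m04 report Ready. A hedged one-liner sketch (context name from this report) to pull just the Ready condition and taints per node:

	kubectl --context ha-688277 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\t"}{.spec.taints[*].key}{"\n"}{end}'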
	
	
	==> dmesg <==
	[Sep20 18:56] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep20 19:09] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep20 19:16] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [f984ab13f35de17066368ce3088ba8ddb7e64020ed28983d0d0eddc10f73fa81] <==
	{"level":"warn","ts":"2024-09-20T19:58:05.823055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:57:59.604179Z","time spent":"6.218871413s","remote":"127.0.0.1:49736","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2024-09-20T19:58:05.823075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.225685854s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-20T19:58:05.823086Z","caller":"traceutil/trace.go:171","msg":"trace[1380798169] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"6.225698088s","start":"2024-09-20T19:57:59.597384Z","end":"2024-09-20T19:58:05.823082Z","steps":["trace[1380798169] 'agreement among raft nodes before linearized reading'  (duration: 6.225685814s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:58:05.823101Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:57:59.597342Z","time spent":"6.225751814s","remote":"127.0.0.1:49586","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-09-20T19:58:05.823133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.24285163s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-20T19:58:05.823144Z","caller":"traceutil/trace.go:171","msg":"trace[138221890] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; }","duration":"6.242864356s","start":"2024-09-20T19:57:59.580276Z","end":"2024-09-20T19:58:05.823141Z","steps":["trace[138221890] 'agreement among raft nodes before linearized reading'  (duration: 6.242851302s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:58:05.823162Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:57:59.580224Z","time spent":"6.242932925s","remote":"127.0.0.1:49438","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:500 "}
	{"level":"warn","ts":"2024-09-20T19:58:05.823182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.272473038s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-20T19:58:05.823199Z","caller":"traceutil/trace.go:171","msg":"trace[331330687] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; }","duration":"6.27249081s","start":"2024-09-20T19:57:59.550704Z","end":"2024-09-20T19:58:05.823195Z","steps":["trace[331330687] 'agreement among raft nodes before linearized reading'  (duration: 6.272472841s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:58:05.823212Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:57:59.550665Z","time spent":"6.272543412s","remote":"127.0.0.1:49382","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:500 "}
	{"level":"warn","ts":"2024-09-20T19:58:05.823263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.30103677s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-20T19:58:05.823283Z","caller":"traceutil/trace.go:171","msg":"trace[518160310] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; }","duration":"6.301057668s","start":"2024-09-20T19:57:59.522221Z","end":"2024-09-20T19:58:05.823279Z","steps":["trace[518160310] 'agreement among raft nodes before linearized reading'  (duration: 6.301036622s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:58:05.823299Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:57:59.522165Z","time spent":"6.301128215s","remote":"127.0.0.1:49596","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-09-20T19:58:05.823320Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.30772735s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-20T19:58:05.823331Z","caller":"traceutil/trace.go:171","msg":"trace[1706013023] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"6.307739395s","start":"2024-09-20T19:57:59.515588Z","end":"2024-09-20T19:58:05.823327Z","steps":["trace[1706013023] 'agreement among raft nodes before linearized reading'  (duration: 6.307727087s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:58:05.823343Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:57:59.515550Z","time spent":"6.307789223s","remote":"127.0.0.1:49408","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:500 "}
	{"level":"warn","ts":"2024-09-20T19:58:05.823363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.766075319s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-20T19:58:05.823373Z","caller":"traceutil/trace.go:171","msg":"trace[571267429] range","detail":"{range_begin:/registry/clusterroles; range_end:; }","duration":"6.766088054s","start":"2024-09-20T19:57:59.057282Z","end":"2024-09-20T19:58:05.823370Z","steps":["trace[571267429] 'agreement among raft nodes before linearized reading'  (duration: 6.766075156s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:58:05.823385Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:57:59.057253Z","time spent":"6.766128833s","remote":"127.0.0.1:49578","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"warn","ts":"2024-09-20T19:58:05.823402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.766134355s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-20T19:58:05.823418Z","caller":"traceutil/trace.go:171","msg":"trace[900642737] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; }","duration":"6.766150255s","start":"2024-09-20T19:57:59.057264Z","end":"2024-09-20T19:58:05.823415Z","steps":["trace[900642737] 'agreement among raft nodes before linearized reading'  (duration: 6.76613451s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:58:05.823432Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:57:59.057226Z","time spent":"6.766201093s","remote":"127.0.0.1:49586","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2024-09-20T19:58:05.823453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.918316408s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-uduuklyzcx26d5lhszecjlak5u\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-20T19:58:05.823464Z","caller":"traceutil/trace.go:171","msg":"trace[2112972597] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-uduuklyzcx26d5lhszecjlak5u; range_end:; }","duration":"6.918328405s","start":"2024-09-20T19:57:58.905132Z","end":"2024-09-20T19:58:05.823460Z","steps":["trace[2112972597] 'agreement among raft nodes before linearized reading'  (duration: 6.918317172s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T19:58:05.823488Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T19:57:58.905081Z","time spent":"6.918402749s","remote":"127.0.0.1:49500","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/leases/kube-system/apiserver-uduuklyzcx26d5lhszecjlak5u\" "}
	
	
	==> kernel <==
	 19:59:42 up  3:42,  0 users,  load average: 1.42, 2.26, 2.05
	Linux ha-688277 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b87463fa2dfe785ff0a8046ebd6cc3121b959b0263998fb7774e68c4709ff152] <==
	I0920 19:59:04.993298       1 main.go:299] handling current node
	I0920 19:59:04.998430       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0920 19:59:04.998470       1 main.go:322] Node ha-688277-m02 has CIDR [10.244.1.0/24] 
	I0920 19:59:04.998615       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0920 19:59:04.998704       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0920 19:59:04.998718       1 main.go:322] Node ha-688277-m04 has CIDR [10.244.3.0/24] 
	I0920 19:59:04.998762       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0920 19:59:14.992990       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:59:14.993026       1 main.go:299] handling current node
	I0920 19:59:14.993043       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0920 19:59:14.993051       1 main.go:322] Node ha-688277-m02 has CIDR [10.244.1.0/24] 
	I0920 19:59:14.993231       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0920 19:59:14.993243       1 main.go:322] Node ha-688277-m04 has CIDR [10.244.3.0/24] 
	I0920 19:59:24.992952       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:59:24.992994       1 main.go:299] handling current node
	I0920 19:59:24.993012       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0920 19:59:24.993018       1 main.go:322] Node ha-688277-m02 has CIDR [10.244.1.0/24] 
	I0920 19:59:24.993112       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0920 19:59:24.993126       1 main.go:322] Node ha-688277-m04 has CIDR [10.244.3.0/24] 
	I0920 19:59:34.993735       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:59:34.993768       1 main.go:299] handling current node
	I0920 19:59:34.993785       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0920 19:59:34.993791       1 main.go:322] Node ha-688277-m02 has CIDR [10.244.1.0/24] 
	I0920 19:59:34.993900       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0920 19:59:34.993920       1 main.go:322] Node ha-688277-m04 has CIDR [10.244.3.0/24] 
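
kindnet looks healthy here: every ten seconds it walks the node list and keeps one route per remote PodCIDR (10.244.1.0/24 via 192.168.49.3 for m02, 10.244.3.0/24 via 192.168.49.5 for m04). A hedged sketch to confirm those routes actually landed on the node (profile name from this report):

	out/minikube-linux-arm64 -p ha-688277 ssh -- ip route show | grep 10.244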
	
	
	==> kube-apiserver [22e325005b87e0b9612ee086ad373af4b9fe8d1a75126c22efff2066c0fd8db7] <==
	I0920 19:58:53.452257       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0920 19:58:53.452267       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0920 19:58:53.476281       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 19:58:53.476399       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 19:58:53.515827       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 19:58:53.515850       1 policy_source.go:224] refreshing policies
	I0920 19:58:53.554292       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 19:58:53.554724       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 19:58:53.554825       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 19:58:53.554874       1 aggregator.go:171] initial CRD sync complete...
	I0920 19:58:53.554907       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 19:58:53.554936       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 19:58:53.566514       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 19:58:53.589058       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 19:58:53.639209       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 19:58:53.640488       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 19:58:53.640507       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 19:58:53.651954       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 19:58:53.653600       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 19:58:53.655989       1 cache.go:39] Caches are synced for autoregister controller
	I0920 19:58:53.663255       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 19:58:54.452417       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 19:58:55.069025       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0920 19:58:55.071901       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 19:58:55.087804       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cef86b6ed3a0e26fa6e4866fc0693e569533081b90800387de97d7cc624483ff] <==
	W0920 19:58:05.867531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Role: etcdserver: leader changed
	E0920 19:58:05.867574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Role: failed to list *v1.Role: etcdserver: leader changed" logger="UnhandledError"
	I0920 19:58:06.061422       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 19:58:07.755477       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 19:58:07.755560       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 19:58:07.854358       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 19:58:07.931270       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 19:58:07.936971       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 19:58:08.057290       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 19:58:08.057465       1 aggregator.go:171] initial CRD sync complete...
	I0920 19:58:08.057514       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 19:58:08.057548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 19:58:08.057596       1 cache.go:39] Caches are synced for autoregister controller
	I0920 19:58:08.554304       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 19:58:08.554445       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 19:58:08.554422       1 cache.go:39] Caches are synced for RemoteAvailability controller
	W0920 19:58:08.577512       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0920 19:58:08.957347       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 19:58:08.973079       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 19:58:08.973174       1 policy_source.go:224] refreshing policies
	I0920 19:58:08.983608       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 19:58:08.983915       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 19:58:08.996938       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 19:58:09.006827       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	F0920 19:58:48.049000       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
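
This is the Exited apiserver from the container listing (attempt 3): it synced caches at 19:58:08, served for about forty seconds, then died fatally because the start-service-ip-repair-controllers post-start hook never completed its initial IP/port allocation check; the same hook is the one the controller-manager's /healthz output reports as failed below. A hedged sketch for confirming the crash loop from outside (context and pod names from this report):

	kubectl --context ha-688277 -n kube-system get pod kube-apiserver-ha-688277 \
	  -o jsonpath='{.status.containerStatuses[0].restartCount}{"\t"}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'
	# Tail the previous (crashed) container's log.
	kubectl --context ha-688277 -n kube-system logs kube-apiserver-ha-688277 --previous | tail -n 5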
	
	
	==> kube-controller-manager [b400a71d1bcab9e55fb8fe048e5ae9733ab1490f0ace4ca25ccf74a8306dd299] <==
	I0920 19:58:26.436031       1 serving.go:386] Generated self-signed cert in-memory
	I0920 19:58:27.212881       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 19:58:27.212982       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:58:27.214521       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 19:58:27.214680       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0920 19:58:27.215253       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 19:58:27.215327       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0920 19:58:37.249408       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [cc15d39963bb600216966876c081d379ddf6abb4f539cc139269cade08da4e5e] <==
	I0920 19:59:13.263371       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 19:59:13.299219       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 19:59:16.092158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-688277-m04"
	I0920 19:59:16.092497       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-688277-m04"
	I0920 19:59:16.114324       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-688277-m04"
	I0920 19:59:17.738195       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-688277-m04"
	I0920 19:59:19.258229       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.03µs"
	I0920 19:59:20.315286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.263µs"
	I0920 19:59:34.453849       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="107.778864ms"
	I0920 19:59:34.454079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="98.829µs"
	I0920 19:59:35.865302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-688277"
	I0920 19:59:35.865736       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-688277-m04"
	I0920 19:59:35.890691       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-688277"
	I0920 19:59:36.068413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.789111ms"
	I0920 19:59:36.070693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.399µs"
	I0920 19:59:37.809245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-688277"
	I0920 19:59:38.671260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.479293ms"
	I0920 19:59:38.671426       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.539µs"
	I0920 19:59:38.731442       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-wk6t8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-wk6t8\": the object has been modified; please apply your changes to the latest version and try again"
	I0920 19:59:38.731817       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"b5971b6d-1ef6-4976-9f44-2d4da6e9c233", APIVersion:"v1", ResourceVersion:"286", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-wk6t8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-wk6t8": the object has been modified; please apply your changes to the latest version and try again
	I0920 19:59:38.823929       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-wk6t8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-wk6t8\": the object has been modified; please apply your changes to the latest version and try again"
	I0920 19:59:38.824672       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"b5971b6d-1ef6-4976-9f44-2d4da6e9c233", APIVersion:"v1", ResourceVersion:"286", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-wk6t8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-wk6t8": the object has been modified; please apply your changes to the latest version and try again
	I0920 19:59:38.858213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.315471ms"
	I0920 19:59:38.858636       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="181.838µs"
	I0920 19:59:41.090146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-688277"
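A note on the "Operation cannot be fulfilled ... the object has been modified" lines above: these are routine optimistic-concurrency conflicts. The endpointslice controller's update raced another writer on the object's resourceVersion, so it re-reads and retries, which is why the messages are logged at info level and the sync still completes. For code that talks to the same API, client-go ships a standard helper for this pattern. A minimal sketch, assuming a configured `clientset`; the label key written here is purely illustrative and is not something this test touches:

	// Minimal sketch (not from the minikube source): retry an update when
	// the apiserver reports "the object has been modified", the conflict
	// error seen in the endpointslice-controller lines above.
	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	func touchEndpointSlice(ctx context.Context, clientset *kubernetes.Clientset) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read on every attempt so the update carries a fresh resourceVersion.
			es, err := clientset.DiscoveryV1().EndpointSlices("kube-system").Get(ctx, "kube-dns-wk6t8", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if es.Labels == nil {
				es.Labels = map[string]string{}
			}
			es.Labels["example.com/touched"] = "true" // hypothetical change, for illustration only
			_, err = clientset.DiscoveryV1().EndpointSlices("kube-system").Update(ctx, es, metav1.UpdateOptions{})
			return err
		})
	}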
	
	
	==> kube-proxy [6ff972542ca0abcd9120bc3cf4ddc754800538d04d0b562897cb78116e7bafce] <==
	I0920 19:58:25.380899       1 server_linux.go:66] "Using iptables proxy"
	I0920 19:58:25.683499       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 19:58:25.683600       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:58:25.711523       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 19:58:25.711590       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:58:25.725909       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:58:25.726498       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:58:25.726524       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:58:25.735439       1 config.go:199] "Starting service config controller"
	I0920 19:58:25.735483       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:58:25.735513       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:58:25.735518       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:58:25.736050       1 config.go:328] "Starting node config controller"
	I0920 19:58:25.736067       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:58:25.836546       1 shared_informer.go:320] Caches are synced for node config
	I0920 19:58:25.836595       1 shared_informer.go:320] Caches are synced for service config
	I0920 19:58:25.836622       1 shared_informer.go:320] Caches are synced for endpoint slice config
	W0920 19:59:38.599936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-688277&resourceVersion=2737": http2: client connection lost
	E0920 19:59:38.600051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-688277&resourceVersion=2737\": http2: client connection lost" logger="UnhandledError"
	W0920 19:59:38.600170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2734": http2: client connection lost
	E0920 19:59:38.600196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2734\": http2: client connection lost" logger="UnhandledError"
	W0920 19:59:38.600516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2819": http2: client connection lost
	E0920 19:59:38.600593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2819\": http2: client connection lost" logger="UnhandledError"
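The reflector warnings at the end of the kube-proxy log ("http2: client connection lost") are client-go's informer machinery noticing the apiserver restart; each informer relists and re-watches on its own, and the earlier "Caches are synced" lines mark the initial list completing. A minimal sketch of that machinery, assuming a configured `clientset`; the handler body is illustrative:

	// Minimal sketch of a client-go shared informer; the reflector behind
	// it relists and re-watches automatically after connection-loss errors
	// like the ones logged above, with no application code involved.
	package sketch

	import (
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
	)

	func watchNodes(clientset *kubernetes.Clientset, stopCh <-chan struct{}) {
		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()
		nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
			UpdateFunc: func(oldObj, newObj interface{}) {
				fmt.Println("node updated:", newObj.(*corev1.Node).Name)
			},
		})
		factory.Start(stopCh)
		// The "Caches are synced" lines in the log correspond to this step.
		cache.WaitForCacheSync(stopCh, nodeInformer.HasSynced)
	}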
	
	
	==> kube-scheduler [27f0fc41a63870f23487aa113d2a05fae95f8461fff7649999f5a8709497f9c5] <==
	W0920 19:57:58.221977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 19:57:58.222022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:57:58.718428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:57:58.718471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:58:01.839608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 19:58:01.839781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:58:02.180416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:58:02.180550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:58:04.261785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:58:04.261836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:58:04.722614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 19:58:04.722800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:58:05.550552       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:58:05.550599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:58:05.556345       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 19:58:05.556397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 19:58:05.853085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 19:58:05.853146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:58:06.409518       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 19:58:06.409560       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 19:58:06.694411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:58:06.694455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:58:06.725011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:58:06.725063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 19:58:30.748428       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
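The scheduler's "forbidden" errors above are startup-time RBAC denials while the restarted control plane is still settling; they stop once the apiserver's authorization data is available, and the final "Caches are synced" line shows the scheduler recovering. To check one of these permissions by hand, a SelfSubjectAccessReview asks the apiserver directly. A minimal sketch for the csidrivers case, assuming a configured `clientset`:

	// Minimal sketch: ask the apiserver whether the current identity may
	// list csidrivers.storage.k8s.io, the permission the scheduler log
	// above reports as forbidden. Assumes `clientset` is configured.
	package sketch

	import (
		"context"
		"fmt"

		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func canListCSIDrivers(ctx context.Context, clientset *kubernetes.Clientset) (bool, error) {
		review := &authorizationv1.SelfSubjectAccessReview{
			Spec: authorizationv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Group:    "storage.k8s.io",
					Resource: "csidrivers",
				},
			},
		}
		resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, review, metav1.CreateOptions{})
		if err != nil {
			return false, err
		}
		fmt.Println("allowed:", resp.Status.Allowed, "reason:", resp.Status.Reason)
		return resp.Status.Allowed, nil
	}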
	
	
	==> kubelet <==
	Sep 20 19:59:15 ha-688277 kubelet[759]: E0920 19:59:15.048929     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862355048534148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:15 ha-688277 kubelet[759]: E0920 19:59:15.048970     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862355048534148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:23 ha-688277 kubelet[759]: E0920 19:59:23.818481     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-688277?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 20 19:59:25 ha-688277 kubelet[759]: E0920 19:59:25.051282     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862365051022816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:25 ha-688277 kubelet[759]: E0920 19:59:25.051321     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862365051022816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:33 ha-688277 kubelet[759]: E0920 19:59:33.818753     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-688277?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 20 19:59:35 ha-688277 kubelet[759]: E0920 19:59:35.053208     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862375052920032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:35 ha-688277 kubelet[759]: E0920 19:59:35.053244     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726862375052920032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.598507     759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2677&timeout=7m44s&timeoutSeconds=464&watch=true\": http2: client connection lost" logger="UnhandledError"
	Sep 20 19:59:38 ha-688277 kubelet[759]: W0920 19:59:38.598514     759 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2735": http2: client connection lost
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.598574     759 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2735\": http2: client connection lost" logger="UnhandledError"
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.598596     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-688277?timeout=10s\": http2: client connection lost"
	Sep 20 19:59:38 ha-688277 kubelet[759]: W0920 19:59:38.598623     759 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2735": http2: client connection lost
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.598650     759 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2735\": http2: client connection lost" logger="UnhandledError"
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.598696     759 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-688277.17f70c13e2540dd3\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-688277.17f70c13e2540dd3  kube-system   2709 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-688277,UID:9d1815d09fcf2b979e0759deb2900d9a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.1\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-688277,},FirstTimestamp:2024-09-20 19:57:41 +0000 UTC,LastTimestamp:2024-09-20 19:58:49.264930837 +0000 UTC m=+74.479606119,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-688277,}"
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.598824     759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dha-688277&resourceVersion=2737&timeout=8m6s&timeoutSeconds=486&watch=true\": http2: client connection lost" logger="UnhandledError"
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.598867     759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=2678&timeout=6m27s&timeoutSeconds=387&watch=true\": http2: client connection lost" logger="UnhandledError"
	Sep 20 19:59:38 ha-688277 kubelet[759]: W0920 19:59:38.598910     759 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-688277&resourceVersion=2871": http2: client connection lost
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.598939     759 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-688277&resourceVersion=2871\": http2: client connection lost" logger="UnhandledError"
	Sep 20 19:59:38 ha-688277 kubelet[759]: W0920 19:59:38.598979     759 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2735": http2: client connection lost
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.599009     759 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2735\": http2: client connection lost" logger="UnhandledError"
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.599053     759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=2734&timeout=5m15s&timeoutSeconds=315&watch=true\": http2: client connection lost" logger="UnhandledError"
	Sep 20 19:59:38 ha-688277 kubelet[759]: I0920 19:59:38.599108     759 status_manager.go:851] "Failed to get status for pod" podUID="276279f77cb77c815b827f55ae46ada9" pod="kube-system/kube-vip-ha-688277" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-688277\": http2: client connection lost"
	Sep 20 19:59:38 ha-688277 kubelet[759]: W0920 19:59:38.599413     759 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2735": http2: client connection lost
	Sep 20 19:59:38 ha-688277 kubelet[759]: E0920 19:59:38.599459     759 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2735\": http2: client connection lost" logger="UnhandledError"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-688277 -n ha-688277
helpers_test.go:261: (dbg) Run:  kubectl --context ha-688277 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (137.04s)
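One recurring error in the post-mortem worth a note: the kubelet's eviction manager repeatedly fails with "missing image stats", meaning it rejects the ImageFsInfo response it gets from CRI-O. The same RPC can be queried directly over the CRI socket to inspect what the runtime actually returns. A minimal sketch, assuming CRI-O's default socket path (not taken from this run) and the k8s.io/cri-api and google.golang.org/grpc modules:

	// Minimal sketch: query ImageFsInfo over the CRI socket, the RPC whose
	// response the kubelet eviction manager rejects in the log above. The
	// socket path is an assumption (CRI-O's default).
	package main

	import (
		"context"
		"fmt"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		resp, err := runtimeapi.NewImageServiceClient(conn).
			ImageFsInfo(context.Background(), &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		// Fields mirror the ImageFsInfoResponse dump in the kubelet error.
		for _, fs := range resp.ImageFilesystems {
			fmt.Printf("mountpoint=%s used=%d inodes=%d\n",
				fs.FsId.Mountpoint, fs.UsedBytes.Value, fs.InodesUsed.Value)
		}
	}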

Test pass (294/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.81
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 8.9
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.22
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.17
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 235.77
31 TestAddons/serial/GCPAuth/Namespaces 0.22
35 TestAddons/parallel/InspektorGadget 10.85
38 TestAddons/parallel/CSI 51.58
39 TestAddons/parallel/Headlamp 17.98
40 TestAddons/parallel/CloudSpanner 6.65
41 TestAddons/parallel/LocalPath 9.43
42 TestAddons/parallel/NvidiaDevicePlugin 6.51
43 TestAddons/parallel/Yakd 10.83
44 TestAddons/StoppedEnableDisable 12.27
45 TestCertOptions 36.11
46 TestCertExpiration 243.66
48 TestForceSystemdFlag 43.02
49 TestForceSystemdEnv 44.9
55 TestErrorSpam/setup 36.3
56 TestErrorSpam/start 0.8
57 TestErrorSpam/status 1.14
58 TestErrorSpam/pause 1.94
59 TestErrorSpam/unpause 1.96
60 TestErrorSpam/stop 1.49
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 50
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 28.58
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.56
72 TestFunctional/serial/CacheCmd/cache/add_local 1.53
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
74 TestFunctional/serial/CacheCmd/cache/list 0.07
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.37
77 TestFunctional/serial/CacheCmd/cache/delete 0.15
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 34.81
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.88
83 TestFunctional/serial/LogsFileCmd 1.84
84 TestFunctional/serial/InvalidService 4.28
86 TestFunctional/parallel/ConfigCmd 0.45
87 TestFunctional/parallel/DashboardCmd 13.71
88 TestFunctional/parallel/DryRun 0.46
89 TestFunctional/parallel/InternationalLanguage 0.22
90 TestFunctional/parallel/StatusCmd 1.09
94 TestFunctional/parallel/ServiceCmdConnect 10.75
95 TestFunctional/parallel/AddonsCmd 0.23
96 TestFunctional/parallel/PersistentVolumeClaim 25.77
98 TestFunctional/parallel/SSHCmd 0.73
99 TestFunctional/parallel/CpCmd 2.38
101 TestFunctional/parallel/FileSync 0.39
102 TestFunctional/parallel/CertSync 2.15
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
110 TestFunctional/parallel/License 0.28
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.48
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
124 TestFunctional/parallel/ProfileCmd/profile_list 0.43
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
126 TestFunctional/parallel/MountCmd/any-port 10.18
127 TestFunctional/parallel/ServiceCmd/List 0.58
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.87
130 TestFunctional/parallel/ServiceCmd/Format 0.54
131 TestFunctional/parallel/ServiceCmd/URL 0.43
132 TestFunctional/parallel/MountCmd/specific-port 2.51
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.32
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.92
141 TestFunctional/parallel/ImageCommands/Setup 0.72
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.08
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.65
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.94
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.72
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 177.71
159 TestMultiControlPlane/serial/DeployApp 9.83
160 TestMultiControlPlane/serial/PingHostFromPods 1.8
161 TestMultiControlPlane/serial/AddWorkerNode 36.94
162 TestMultiControlPlane/serial/NodeLabels 0.14
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
164 TestMultiControlPlane/serial/CopyFile 20
165 TestMultiControlPlane/serial/StopSecondaryNode 12.81
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
167 TestMultiControlPlane/serial/RestartSecondaryNode 22.05
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.34
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 204.22
170 TestMultiControlPlane/serial/DeleteSecondaryNode 13.2
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
172 TestMultiControlPlane/serial/StopCluster 36.1
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.01
175 TestMultiControlPlane/serial/AddSecondaryNode 70.96
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
180 TestJSONOutput/start/Command 47.94
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.78
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 1.01
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.89
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 39.25
206 TestKicCustomNetwork/use_default_bridge_network 35.35
207 TestKicExistingNetwork 35.5
208 TestKicCustomSubnet 33.79
209 TestKicStaticIP 37.15
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 72.5
214 TestMountStart/serial/StartWithMountFirst 10.97
215 TestMountStart/serial/VerifyMountFirst 0.28
216 TestMountStart/serial/StartWithMountSecond 7.27
217 TestMountStart/serial/VerifyMountSecond 0.27
218 TestMountStart/serial/DeleteFirst 1.74
219 TestMountStart/serial/VerifyMountPostDelete 0.3
220 TestMountStart/serial/Stop 1.22
221 TestMountStart/serial/RestartStopped 8.22
222 TestMountStart/serial/VerifyMountPostStop 0.27
225 TestMultiNode/serial/FreshStart2Nodes 105.81
226 TestMultiNode/serial/DeployApp2Nodes 6.82
227 TestMultiNode/serial/PingHostFrom2Pods 1.03
228 TestMultiNode/serial/AddNode 57.08
229 TestMultiNode/serial/MultiNodeLabels 0.1
230 TestMultiNode/serial/ProfileList 0.71
231 TestMultiNode/serial/CopyFile 10.36
232 TestMultiNode/serial/StopNode 2.71
233 TestMultiNode/serial/StartAfterStop 10.24
234 TestMultiNode/serial/RestartKeepsNodes 106.96
235 TestMultiNode/serial/DeleteNode 6.07
236 TestMultiNode/serial/StopMultiNode 23.95
237 TestMultiNode/serial/RestartMultiNode 63.41
238 TestMultiNode/serial/ValidateNameConflict 36.77
243 TestPreload 128.35
245 TestScheduledStopUnix 106.02
248 TestInsufficientStorage 11.2
249 TestRunningBinaryUpgrade 65.08
251 TestKubernetesUpgrade 398.2
252 TestMissingContainerUpgrade 171.49
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 43.57
256 TestNoKubernetes/serial/StartWithStopK8s 8.12
257 TestNoKubernetes/serial/Start 9.5
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
259 TestNoKubernetes/serial/ProfileList 1.2
260 TestNoKubernetes/serial/Stop 1.28
261 TestNoKubernetes/serial/StartNoArgs 8.44
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
263 TestStoppedBinaryUpgrade/Setup 1.52
264 TestStoppedBinaryUpgrade/Upgrade 78.6
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.54
274 TestPause/serial/Start 84.37
275 TestPause/serial/SecondStartNoReconfiguration 24.42
276 TestPause/serial/Pause 1.04
277 TestPause/serial/VerifyStatus 0.56
278 TestPause/serial/Unpause 1.21
279 TestPause/serial/PauseAgain 1.35
280 TestPause/serial/DeletePaused 3.85
281 TestPause/serial/VerifyDeletedResources 15.67
289 TestNetworkPlugins/group/false 5.52
294 TestStartStop/group/old-k8s-version/serial/FirstStart 162.88
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.83
297 TestStartStop/group/embed-certs/serial/FirstStart 81.7
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.84
299 TestStartStop/group/old-k8s-version/serial/Stop 12.56
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
301 TestStartStop/group/old-k8s-version/serial/SecondStart 149.39
302 TestStartStop/group/embed-certs/serial/DeployApp 10.48
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
304 TestStartStop/group/embed-certs/serial/Stop 12.03
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
306 TestStartStop/group/embed-certs/serial/SecondStart 300.55
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
310 TestStartStop/group/old-k8s-version/serial/Pause 3.1
312 TestStartStop/group/no-preload/serial/FirstStart 67.51
313 TestStartStop/group/no-preload/serial/DeployApp 10.4
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
315 TestStartStop/group/no-preload/serial/Stop 12.04
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/no-preload/serial/SecondStart 282.02
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
321 TestStartStop/group/embed-certs/serial/Pause 3.19
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.99
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.39
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.21
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.31
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
331 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
332 TestStartStop/group/no-preload/serial/Pause 3.22
334 TestStartStop/group/newest-cni/serial/FirstStart 35.83
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
337 TestStartStop/group/newest-cni/serial/Stop 1.26
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
339 TestStartStop/group/newest-cni/serial/SecondStart 17.16
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
343 TestStartStop/group/newest-cni/serial/Pause 3.15
344 TestNetworkPlugins/group/auto/Start 77.65
345 TestNetworkPlugins/group/auto/KubeletFlags 0.29
346 TestNetworkPlugins/group/auto/NetCatPod 10.27
347 TestNetworkPlugins/group/auto/DNS 0.17
348 TestNetworkPlugins/group/auto/Localhost 0.23
349 TestNetworkPlugins/group/auto/HairPin 0.23
350 TestNetworkPlugins/group/kindnet/Start 81.43
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.23
355 TestNetworkPlugins/group/calico/Start 67.58
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.41
359 TestNetworkPlugins/group/kindnet/DNS 0.33
360 TestNetworkPlugins/group/kindnet/Localhost 0.31
361 TestNetworkPlugins/group/kindnet/HairPin 0.27
362 TestNetworkPlugins/group/custom-flannel/Start 57.79
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.3
365 TestNetworkPlugins/group/calico/NetCatPod 13.34
366 TestNetworkPlugins/group/calico/DNS 0.26
367 TestNetworkPlugins/group/calico/Localhost 0.23
368 TestNetworkPlugins/group/calico/HairPin 0.33
369 TestNetworkPlugins/group/enable-default-cni/Start 83.04
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.26
372 TestNetworkPlugins/group/custom-flannel/DNS 0.26
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
375 TestNetworkPlugins/group/flannel/Start 53.82
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.3
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
383 TestNetworkPlugins/group/flannel/NetCatPod 11.39
384 TestNetworkPlugins/group/flannel/DNS 0.26
385 TestNetworkPlugins/group/bridge/Start 82.99
386 TestNetworkPlugins/group/flannel/Localhost 0.21
387 TestNetworkPlugins/group/flannel/HairPin 0.18
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
389 TestNetworkPlugins/group/bridge/NetCatPod 10.29
390 TestNetworkPlugins/group/bridge/DNS 0.17
391 TestNetworkPlugins/group/bridge/Localhost 0.16
392 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (7.81s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-533694 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-533694 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.810995054s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.81s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 19:25:18.386151  719734 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0920 19:25:18.386242  719734 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-533694
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-533694: exit status 85 (68.621155ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-533694 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |          |
	|         | -p download-only-533694        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:25:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:25:10.614829  719740 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:10.614957  719740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:10.614966  719740 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:10.614972  719740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:10.615234  719740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	W0920 19:25:10.615375  719740 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19678-712952/.minikube/config/config.json: open /home/jenkins/minikube-integration/19678-712952/.minikube/config/config.json: no such file or directory
	I0920 19:25:10.615783  719740 out.go:352] Setting JSON to true
	I0920 19:25:10.616765  719740 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11260,"bootTime":1726849051,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 19:25:10.616849  719740 start.go:139] virtualization:  
	I0920 19:25:10.619692  719740 out.go:97] [download-only-533694] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0920 19:25:10.619843  719740 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 19:25:10.619885  719740 notify.go:220] Checking for updates...
	I0920 19:25:10.622140  719740 out.go:169] MINIKUBE_LOCATION=19678
	I0920 19:25:10.624145  719740 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:25:10.625771  719740 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:25:10.627351  719740 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	I0920 19:25:10.628898  719740 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 19:25:10.631511  719740 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 19:25:10.631760  719740 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:25:10.660865  719740 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:25:10.660986  719740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:10.721081  719740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:25:10.70953767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:10.721214  719740 docker.go:318] overlay module found
	I0920 19:25:10.722604  719740 out.go:97] Using the docker driver based on user configuration
	I0920 19:25:10.722637  719740 start.go:297] selected driver: docker
	I0920 19:25:10.722644  719740 start.go:901] validating driver "docker" against <nil>
	I0920 19:25:10.722766  719740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:10.774540  719740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:25:10.764748274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:10.774769  719740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:25:10.775056  719740 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 19:25:10.775215  719740 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 19:25:10.776624  719740 out.go:169] Using Docker driver with root privileges
	I0920 19:25:10.777791  719740 cni.go:84] Creating CNI manager for ""
	I0920 19:25:10.777850  719740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:25:10.777858  719740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 19:25:10.777944  719740 start.go:340] cluster config:
	{Name:download-only-533694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-533694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:10.779486  719740 out.go:97] Starting "download-only-533694" primary control-plane node in "download-only-533694" cluster
	I0920 19:25:10.779522  719740 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 19:25:10.780809  719740 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:25:10.780841  719740 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:25:10.781020  719740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:25:10.796888  719740 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:25:10.797091  719740 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:25:10.797187  719740 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:25:10.839384  719740 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0920 19:25:10.839410  719740 cache.go:56] Caching tarball of preloaded images
	I0920 19:25:10.839575  719740 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 19:25:10.841100  719740 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 19:25:10.841127  719740 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0920 19:25:10.928496  719740 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0920 19:25:15.231554  719740 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	
	
	* The control-plane node download-only-533694 host does not exist
	  To start a cluster, run: "minikube start -p download-only-533694"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
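Note: the exit status 85 above is expected for this subtest. A --download-only start only fetches the kic base image and the preload tarball; it never creates the control-plane host, so a later "minikube logs" has nothing to read, exactly as the stdout above states. A minimal reproduction, assuming a hypothetical profile name and reusing the flags from this run:

	minikube start --download-only -p download-only-demo \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker
	minikube logs -p download-only-demo   # fails: the control-plane host was never created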

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-533694
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (8.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-484642 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-484642 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.898440266s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 19:25:27.691265  719734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 19:25:27.691304  719734 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)
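This subtest only asserts that the preceding download left the preload tarball in minikube's cache (preload.go:146 above). A manual spot check, assuming MINIKUBE_HOME points at the same .minikube directory used in this run:

	ls "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4"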

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-484642
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-484642: exit status 85 (83.966864ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-533694 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | -p download-only-533694        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| delete  | -p download-only-533694        | download-only-533694 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC | 20 Sep 24 19:25 UTC |
	| start   | -o=json --download-only        | download-only-484642 | jenkins | v1.34.0 | 20 Sep 24 19:25 UTC |                     |
	|         | -p download-only-484642        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:25:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:25:18.832612  719940 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:18.832787  719940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:18.832799  719940 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:18.832804  719940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:18.833032  719940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 19:25:18.833485  719940 out.go:352] Setting JSON to true
	I0920 19:25:18.834396  719940 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11268,"bootTime":1726849051,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 19:25:18.834473  719940 start.go:139] virtualization:  
	I0920 19:25:18.836263  719940 out.go:97] [download-only-484642] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:25:18.836517  719940 notify.go:220] Checking for updates...
	I0920 19:25:18.837595  719940 out.go:169] MINIKUBE_LOCATION=19678
	I0920 19:25:18.838902  719940 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:25:18.840399  719940 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:25:18.841597  719940 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	I0920 19:25:18.842871  719940 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 19:25:18.845135  719940 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 19:25:18.845379  719940 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:25:18.873887  719940 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:25:18.874019  719940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:18.936770  719940 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 19:25:18.925236939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:18.936888  719940 docker.go:318] overlay module found
	I0920 19:25:18.938287  719940 out.go:97] Using the docker driver based on user configuration
	I0920 19:25:18.938360  719940 start.go:297] selected driver: docker
	I0920 19:25:18.938376  719940 start.go:901] validating driver "docker" against <nil>
	I0920 19:25:18.938486  719940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:25:18.992005  719940 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 19:25:18.982721098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:25:18.992218  719940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:25:18.992506  719940 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 19:25:18.992730  719940 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 19:25:18.994216  719940 out.go:169] Using Docker driver with root privileges
	I0920 19:25:18.995528  719940 cni.go:84] Creating CNI manager for ""
	I0920 19:25:18.995591  719940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 19:25:18.995609  719940 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 19:25:18.995692  719940 start.go:340] cluster config:
	{Name:download-only-484642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-484642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:25:18.997061  719940 out.go:97] Starting "download-only-484642" primary control-plane node in "download-only-484642" cluster
	I0920 19:25:18.997089  719940 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 19:25:18.998404  719940 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:25:18.998430  719940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:18.998601  719940 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:25:19.015296  719940 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:25:19.015440  719940 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:25:19.015470  719940 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:25:19.015479  719940 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:25:19.015487  719940 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:25:19.061996  719940 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0920 19:25:19.062022  719940 cache.go:56] Caching tarball of preloaded images
	I0920 19:25:19.062180  719940 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 19:25:19.063616  719940 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 19:25:19.063644  719940 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0920 19:25:19.259363  719940 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19678-712952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-484642 host does not exist
	  To start a cluster, run: "minikube start -p download-only-484642"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-484642
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I0920 19:25:29.025819  719734 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-387387 --alsologtostderr --binary-mirror http://127.0.0.1:34931 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-387387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-387387
--- PASS: TestBinaryMirror (0.61s)
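The mirror test leans on kubectl's published checksum files (note the ?checksum=file:... URL in binary.go:74 above). The same verification can be done by hand for the binary used in this run; a sketch:

	curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl"
	curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # expect: kubectl: OK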

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-244316
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-244316: exit status 85 (64.568047ms)

                                                
                                                
-- stdout --
	* Profile "addons-244316" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-244316"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-244316
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-244316: exit status 85 (85.682172ms)

                                                
                                                
-- stdout --
	* Profile "addons-244316" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-244316"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
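Both PreSetup checks assert that addon commands aimed at a profile that does not exist fail fast instead of creating one (exit status 85 in this run). Reproducible with any hypothetical profile name:

	minikube addons disable dashboard -p no-such-profile
	echo $?   # non-zero; 85 in this run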

                                                
                                    
x
+
TestAddons/Setup (235.77s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-244316 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-244316 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m55.773220739s)
--- PASS: TestAddons/Setup (235.77s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-244316 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-244316 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)
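This check confirms that the gcp-auth addon propagates its gcp-auth secret into namespaces created after setup; the two kubectl calls from the test can be replayed verbatim:

	kubectl --context addons-244316 create ns new-namespace
	kubectl --context addons-244316 get secret gcp-auth -n new-namespace   # present without any manual copy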

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.85s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kj4k6" [8ca31dab-8797-4373-93ea-3d69e3e917d1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003987629s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-244316
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-244316: (5.843893616s)
--- PASS: TestAddons/parallel/InspektorGadget (10.85s)

                                                
                                    
x
+
TestAddons/parallel/CSI (51.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0920 19:37:47.438761  719734 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 19:37:47.445697  719734 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 19:37:47.445735  719734 kapi.go:107] duration metric: took 6.988099ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 6.99847ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-244316 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-244316 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e48b8a51-2684-4fd1-9e9b-6eccfdd18417] Pending
helpers_test.go:344: "task-pv-pod" [e48b8a51-2684-4fd1-9e9b-6eccfdd18417] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e48b8a51-2684-4fd1-9e9b-6eccfdd18417] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004354971s
addons_test.go:528: (dbg) Run:  kubectl --context addons-244316 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-244316 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-244316 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-244316 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-244316 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-244316 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-244316 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [299c3951-54d9-48a8-825d-59352ff0b77b] Pending
helpers_test.go:344: "task-pv-pod-restore" [299c3951-54d9-48a8-825d-59352ff0b77b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [299c3951-54d9-48a8-825d-59352ff0b77b] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004069107s
addons_test.go:570: (dbg) Run:  kubectl --context addons-244316 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-244316 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-244316 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-244316 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.868380111s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.58s)
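The long run of identical helpers_test.go:394 calls above is the helper polling the claim's .status.phase until it settles (presumably Bound) within the 6m0s budget. The equivalent shell loop, as a sketch using the names from this test; the sleep interval is an assumption:

	until [ "$(kubectl --context addons-244316 get pvc hpvc -o 'jsonpath={.status.phase}')" = "Bound" ]; do
	  sleep 2
	done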

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-244316 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-244316 --alsologtostderr -v=1: (1.066515414s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-s59px" [ece47c32-e0aa-4742-bf4b-dced7092f618] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-s59px" [ece47c32-e0aa-4742-bf4b-dced7092f618] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-s59px" [ece47c32-e0aa-4742-bf4b-dced7092f618] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004805876s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-244316 addons disable headlamp --alsologtostderr -v=1: (5.910915872s)
--- PASS: TestAddons/parallel/Headlamp (17.98s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-dp6lg" [8787cfb7-a55c-45a4-8470-b3d8b8bc206f] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006433922s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-244316
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.43s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-244316 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-244316 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-244316 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f4c95d3c-b90f-4b4b-9bef-d6e0674087d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f4c95d3c-b90f-4b4b-9bef-d6e0674087d4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f4c95d3c-b90f-4b4b-9bef-d6e0674087d4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004519361s
addons_test.go:938: (dbg) Run:  kubectl --context addons-244316 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 ssh "cat /opt/local-path-provisioner/pvc-e1313a3d-b51a-462f-b9f3-00a0a6f9bc14_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-244316 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-244316 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.43s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-n79hn" [be19954c-2529-4f25-bd06-6dde36d7e9e8] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004528326s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-244316
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pb547" [fa5c777e-4118-4aa1-bdd7-7c1646f7365e] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004592236s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-244316 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-244316 addons disable yakd --alsologtostderr -v=1: (5.826261124s)
--- PASS: TestAddons/parallel/Yakd (10.83s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-244316
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-244316: (11.983051905s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-244316
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-244316
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-244316
--- PASS: TestAddons/StoppedEnableDisable (12.27s)
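The point of this test is that addon toggling still works against a cluster that has been stopped; the enable/disable calls above run after the ~12s stop and succeed. As a smoke test:

	minikube stop -p addons-244316
	minikube addons enable dashboard -p addons-244316    # accepted while the cluster is stopped
	minikube addons disable dashboard -p addons-244316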

                                                
                                    
x
+
TestCertOptions (36.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-255635 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-255635 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.390011632s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-255635 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-255635 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-255635 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-255635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-255635
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-255635: (2.028193327s)
--- PASS: TestCertOptions (36.11s)
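The SAN and --apiserver-port assertions are made by reading the apiserver certificate from inside the node (cert_options_test.go:60 above). To eyeball the same fields, the grep here being an illustrative addition:

	out/minikube-linux-arm64 -p cert-options-255635 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'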

                                                
                                    
x
+
TestCertExpiration (243.66s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-834793 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-834793 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.794655593s)
E0920 20:27:30.508850  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-834793 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-834793 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.430828066s)
helpers_test.go:175: Cleaning up "cert-expiration-834793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-834793
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-834793: (2.432863714s)
--- PASS: TestCertExpiration (243.66s)
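The flow here: start with deliberately short-lived certificates, let the 3m expiry pass (the test waits between the two starts, which is most of the 243s), then verify that restarting with a normal --cert-expiration brings the cluster back. Both commands straight from the log:

	minikube start -p cert-expiration-834793 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	# ...wait out the 3m certificate lifetime...
	minikube start -p cert-expiration-834793 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio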

                                                
                                    
x
+
TestForceSystemdFlag (43.02s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-188929 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-188929 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.200230928s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-188929 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-188929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-188929
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-188929: (3.430553874s)
--- PASS: TestForceSystemdFlag (43.02s)
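The assertion is that --force-systemd ends up in CRI-O's drop-in config, which docker_test.go:132 reads back above. A manual check; that cgroup_manager is the field this flag sets is our assumption, not something the log states:

	out/minikube-linux-arm64 -p force-systemd-flag-188929 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected if the flag took effect: cgroup_manager = "systemd"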

                                                
                                    
x
+
TestForceSystemdEnv (44.9s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-568845 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-568845 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.145153677s)
helpers_test.go:175: Cleaning up "force-systemd-env-568845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-568845
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-568845: (2.756643502s)
--- PASS: TestForceSystemdEnv (44.90s)

                                                
                                    
x
+
TestErrorSpam/setup (36.3s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-904164 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-904164 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-904164 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-904164 --driver=docker  --container-runtime=crio: (36.302497133s)
--- PASS: TestErrorSpam/setup (36.30s)

                                                
                                    
x
+
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
x
+
TestErrorSpam/status (1.14s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
x
+
TestErrorSpam/pause (1.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 pause
--- PASS: TestErrorSpam/pause (1.94s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.96s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 unpause
--- PASS: TestErrorSpam/unpause (1.96s)

                                                
                                    
x
+
TestErrorSpam/stop (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 stop: (1.278318898s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-904164 --log_dir /tmp/nospam-904164 stop
--- PASS: TestErrorSpam/stop (1.49s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19678-712952/.minikube/files/etc/test/nested/copy/719734/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (50s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-539812 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-539812 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (49.995539822s)
--- PASS: TestFunctional/serial/StartWithProxy (50.00s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (28.58s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0920 19:46:07.582294  719734 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-539812 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-539812 --alsologtostderr -v=8: (28.580159707s)
functional_test.go:663: soft start took 28.580693849s for "functional-539812" cluster.
I0920 19:46:36.162779  719734 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (28.58s)
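"Soft start" here means re-running minikube start against a profile that is already running; the expectation is that it reconciles the existing cluster (about 28s here) rather than recreating it. Replayable as-is:

	out/minikube-linux-arm64 start -p functional-539812 --alsologtostderr -v=8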

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-539812 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 cache add registry.k8s.io/pause:3.1: (1.557870043s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 cache add registry.k8s.io/pause:3.3: (1.581688262s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 cache add registry.k8s.io/pause:latest: (1.419905552s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.56s)
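
For reference, "cache add" pulls an image once, stores it on the host (under MINIKUBE_HOME), and loads it into the node's runtime; list and delete operate on the host-side cache and take no profile. A sketch of the round trip:

	out/minikube-linux-arm64 -p functional-539812 cache add registry.k8s.io/pause:3.1   # pull, store, load into the node
	out/minikube-linux-arm64 cache list
	out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1                     # removes it from the host cache only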

TestFunctional/serial/CacheCmd/cache/add_local (1.53s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-539812 /tmp/TestFunctionalserialCacheCmdcacheadd_local2504500824/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 cache add minikube-local-cache-test:functional-539812
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 cache delete minikube-local-cache-test:functional-539812
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-539812
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.53s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-539812 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.053956ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 cache reload: (1.272984272s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.37s)
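
The expected non-zero "crictl inspecti" exit is the crux of this test: the image is removed inside the node, and "cache reload" pushes every host-cached image back. The same sequence by hand:

	out/minikube-linux-arm64 -p functional-539812 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-539812 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "gone, as expected"
	out/minikube-linux-arm64 -p functional-539812 cache reload
	out/minikube-linux-arm64 -p functional-539812 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again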

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 kubectl -- --context functional-539812 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-539812 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (34.81s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-539812 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-539812 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.806808296s)
functional_test.go:761: restart took 34.806904622s for "functional-539812" cluster.
I0920 19:47:20.557072  719734 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.81s)
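
--extra-config threads per-component flags through to kubeadm; here it turns on the NamespaceAutoProvision admission plugin on the apiserver and restarts the cluster. A sketch for checking the flag landed (the static-pod name follows the usual kube-apiserver-&lt;node&gt; convention and is an assumption of this sketch):

	out/minikube-linux-arm64 start -p functional-539812 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-539812 -n kube-system get pod kube-apiserver-functional-539812 -o yaml | grep enable-admission-plugins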

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-539812 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
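
The health check reads the control-plane pods and asserts phase and readiness per component. Roughly the same check by hand with jsonpath:

	kubectl --context functional-539812 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}: {.status.phase}{"\n"}{end}'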

TestFunctional/serial/LogsCmd (1.88s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 logs: (1.875282573s)
--- PASS: TestFunctional/serial/LogsCmd (1.88s)

TestFunctional/serial/LogsFileCmd (1.84s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 logs --file /tmp/TestFunctionalserialLogsFileCmd2653199170/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 logs --file /tmp/TestFunctionalserialLogsFileCmd2653199170/001/logs.txt: (1.840076291s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.84s)

TestFunctional/serial/InvalidService (4.28s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-539812 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-539812
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-539812: exit status 115 (506.733238ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31877 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-539812 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.28s)
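
Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the service exists and gets a NodePort, but it is backed by no running pod, so "minikube service" prints the URL table and then refuses. The shape of the repro:

	kubectl --context functional-539812 apply -f testdata/invalidsvc.yaml    # service with no running pod behind it
	out/minikube-linux-arm64 service invalid-svc -p functional-539812       # exit 115, SVC_UNREACHABLE
	kubectl --context functional-539812 delete -f testdata/invalidsvc.yaml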

TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-539812 config get cpus: exit status 14 (82.337389ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-539812 config get cpus: exit status 14 (98.332041ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
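
The test pivots on the exit code: "config get" on an absent key fails (status 14 in this run, "specified key could not be found in config"), which makes the unset state scriptable. A sketch:

	out/minikube-linux-arm64 -p functional-539812 config unset cpus
	out/minikube-linux-arm64 -p functional-539812 config get cpus || echo "cpus is unset (exit $?)"
	out/minikube-linux-arm64 -p functional-539812 config set cpus 2
	out/minikube-linux-arm64 -p functional-539812 config get cpus            # prints 2, exit 0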

TestFunctional/parallel/DashboardCmd (13.71s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-539812 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-539812 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 747582: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.71s)

TestFunctional/parallel/DryRun (0.46s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-539812 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-539812 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.895842ms)

-- stdout --
	* [functional-539812] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 19:48:02.782804  747288 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:48:02.783006  747288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:48:02.783012  747288 out.go:358] Setting ErrFile to fd 2...
	I0920 19:48:02.783018  747288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:48:02.783253  747288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 19:48:02.783646  747288 out.go:352] Setting JSON to false
	I0920 19:48:02.784601  747288 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":12632,"bootTime":1726849051,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 19:48:02.784677  747288 start.go:139] virtualization:  
	I0920 19:48:02.788136  747288 out.go:177] * [functional-539812] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:48:02.791996  747288 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:48:02.792073  747288 notify.go:220] Checking for updates...
	I0920 19:48:02.797517  747288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:48:02.800008  747288 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:48:02.802661  747288 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	I0920 19:48:02.805276  747288 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:48:02.807940  747288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:48:02.811054  747288 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:48:02.811692  747288 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:48:02.842034  747288 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:48:02.842265  747288 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:48:02.908185  747288 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:48:02.897492239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:48:02.908304  747288 docker.go:318] overlay module found
	I0920 19:48:02.911206  747288 out.go:177] * Using the docker driver based on existing profile
	I0920 19:48:02.913694  747288 start.go:297] selected driver: docker
	I0920 19:48:02.913720  747288 start.go:901] validating driver "docker" against &{Name:functional-539812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:48:02.913834  747288 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:48:02.917020  747288 out.go:201] 
	W0920 19:48:02.919637  747288 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 19:48:02.922304  747288 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-539812 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
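
--dry-run still runs the full validation pipeline, so the 250MB request is rejected up front with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23 in this run) and nothing is created or modified:

	out/minikube-linux-arm64 start -p functional-539812 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # 23 here; the existing cluster is untouched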

TestFunctional/parallel/InternationalLanguage (0.22s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-539812 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-539812 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (220.447017ms)

-- stdout --
	* [functional-539812] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 19:48:02.570198  747241 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:48:02.570407  747241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:48:02.570418  747241 out.go:358] Setting ErrFile to fd 2...
	I0920 19:48:02.570424  747241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:48:02.570876  747241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 19:48:02.571501  747241 out.go:352] Setting JSON to false
	I0920 19:48:02.572532  747241 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":12632,"bootTime":1726849051,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 19:48:02.572613  747241 start.go:139] virtualization:  
	I0920 19:48:02.576748  747241 out.go:177] * [functional-539812] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0920 19:48:02.579759  747241 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:48:02.579867  747241 notify.go:220] Checking for updates...
	I0920 19:48:02.585075  747241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:48:02.587802  747241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 19:48:02.590501  747241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	I0920 19:48:02.593148  747241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:48:02.595748  747241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:48:02.598875  747241 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:48:02.599603  747241 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:48:02.630979  747241 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:48:02.631135  747241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:48:02.710627  747241 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:48:02.692660022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:48:02.710752  747241 docker.go:318] overlay module found
	I0920 19:48:02.713973  747241 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 19:48:02.716975  747241 start.go:297] selected driver: docker
	I0920 19:48:02.717019  747241 start.go:901] validating driver "docker" against &{Name:functional-539812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:48:02.717134  747241 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:48:02.720533  747241 out.go:201] 
	W0920 19:48:02.723636  747241 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 19:48:02.726972  747241 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.09s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
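
status -f takes a Go template over the status struct (Host, Kubelet, APIServer, Kubeconfig); the "kublet" text in the command above is just the test's chosen output label, not a field name. Equivalent by hand:

	out/minikube-linux-arm64 -p functional-539812 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	out/minikube-linux-arm64 -p functional-539812 status -o json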

TestFunctional/parallel/ServiceCmdConnect (10.75s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-539812 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-539812 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-lglrn" [9a39d44a-499d-40b8-b311-6c68c510ecbe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-lglrn" [9a39d44a-499d-40b8-b311-6c68c510ecbe] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004711653s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32341
functional_test.go:1675: http://192.168.49.2:32341: success! body:

Hostname: hello-node-connect-65d86f57f4-lglrn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32341
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.75s)
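
This is the standard deploy/expose/resolve loop: "service --url" maps the NodePort onto a host-reachable URL. A sketch (the curl call is an addition of this sketch):

	kubectl --context functional-539812 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-539812 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-arm64 -p functional-539812 service hello-node-connect --url)
	curl -s "$URL"   # echoserver reflects the request, as in the body above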

TestFunctional/parallel/AddonsCmd (0.23s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (25.77s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4c7e4f5d-417c-4540-94b4-0a1259ea56fb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004908057s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-539812 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-539812 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-539812 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-539812 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23f3168e-3f3b-4b6a-b203-e3d07aa0bf93] Pending
helpers_test.go:344: "sp-pod" [23f3168e-3f3b-4b6a-b203-e3d07aa0bf93] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [23f3168e-3f3b-4b6a-b203-e3d07aa0bf93] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003554734s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-539812 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-539812 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-539812 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [98cc1b4d-2aa5-496b-bac7-bcad0f4d7a9c] Pending
helpers_test.go:344: "sp-pod" [98cc1b4d-2aa5-496b-bac7-bcad0f4d7a9c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0041153s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-539812 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.77s)
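
The point of the test is that data outlives the pod: the claim is bound by the storage-provisioner addon, one pod writes into the mounted volume, and a replacement pod sees the same file. The skeleton, using the repo's testdata manifests:

	kubectl --context functional-539812 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-539812 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-539812 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-539812 delete -f testdata/storage-provisioner/pod.yaml   # pod gone, PVC stays bound
	kubectl --context functional-539812 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-539812 exec sp-pod -- ls /tmp/mount                      # foo survives the restart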

TestFunctional/parallel/SSHCmd (0.73s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.38s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh -n functional-539812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 cp functional-539812:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1162496435/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh -n functional-539812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh -n functional-539812 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)
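
cp works in both directions; a node-side path is addressed as &lt;profile&gt;:&lt;path&gt;:

	out/minikube-linux-arm64 -p functional-539812 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
	out/minikube-linux-arm64 -p functional-539812 cp functional-539812:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host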

TestFunctional/parallel/FileSync (0.39s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/719734/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo cat /etc/test/nested/copy/719734/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)
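
File sync works because anything placed under MINIKUBE_HOME/files on the host is copied into the node at the same path on start; the file checked here was staged that way in the CopySyncFile step. A sketch with the default ~/.minikube home:

	mkdir -p ~/.minikube/files/etc/test/nested/copy/719734
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/719734/hosts
	out/minikube-linux-arm64 start -p functional-539812
	out/minikube-linux-arm64 -p functional-539812 ssh "sudo cat /etc/test/nested/copy/719734/hosts"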

TestFunctional/parallel/CertSync (2.15s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/719734.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo cat /etc/ssl/certs/719734.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/719734.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo cat /usr/share/ca-certificates/719734.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7197342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo cat /etc/ssl/certs/7197342.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7197342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo cat /usr/share/ca-certificates/7197342.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)
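
The .0 names are OpenSSL subject-hash links: a CA certificate dropped into the trust store gets a companion named &lt;subject_hash&gt;.0 so OpenSSL can look it up. Assuming the synced cert also sits at ~/.minikube/certs/719734.pem on the host (an assumption of this sketch), the pairing can be reproduced with:

	openssl x509 -in ~/.minikube/certs/719734.pem -noout -subject_hash   # expect 51391683, matching /etc/ssl/certs/51391683.0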

TestFunctional/parallel/NodeLabels (0.11s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-539812 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-539812 ssh "sudo systemctl is-active docker": exit status 1 (300.844743ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-539812 ssh "sudo systemctl is-active containerd": exit status 1 (274.634177ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
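
"systemctl is-active" prints the unit state and exits non-zero for anything but "active" (3 for inactive, which minikube ssh propagates, as seen above). Only crio should be active on this profile; the crio unit name is an assumption of this sketch:

	out/minikube-linux-arm64 -p functional-539812 ssh "sudo systemctl is-active docker"   # inactive, ssh exit 3
	out/minikube-linux-arm64 -p functional-539812 ssh "sudo systemctl is-active crio"     # active, exit 0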

TestFunctional/parallel/License (0.28s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-539812 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-539812 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-539812 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-539812 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 744989: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-539812 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-539812 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [68598217-2304-4d43-b92d-6ac418e83727] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [68598217-2304-4d43-b92d-6ac418e83727] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003998977s
I0920 19:47:39.986864  719734 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-539812 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.23.143 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
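
"minikube tunnel" is what gives LoadBalancer services a routable ingress IP (10.102.23.143 above is the address assigned in this run). The serial subtests amount to:

	out/minikube-linux-arm64 -p functional-539812 tunnel &   # keeps running; needs privileges to add routes
	kubectl --context functional-539812 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.102.23.143/                            # IP from this run; yours will differ
	kill %1                                                  # DeleteTunnel: stop the tunnel and clean up routes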

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-539812 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-539812 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-539812 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-w44tb" [8b6b556c-aa58-41f4-a919-2f3055302657] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-w44tb" [8b6b556c-aa58-41f4-a919-2f3055302657] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.006240816s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "378.072067ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "56.649795ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "381.937901ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "61.916143ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
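
Note: `profile list -o json` prints machine-readable profile data. A hedged sketch of parsing it; the top-level valid/invalid arrays and the Name field reflect my understanding of the current schema, so verify against the output of your minikube version:

// Hedged sketch: parse `minikube profile list -o json --light` output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profile struct {
	Name string `json:"Name"` // assumed field name; check your version's output
}

type profileList struct {
	Valid   []profile `json:"valid"`
	Invalid []profile `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("profile:", p.Name)
	}
}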

TestFunctional/parallel/MountCmd/any-port (10.18s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdany-port3815428963/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726861678160557645" to /tmp/TestFunctionalparallelMountCmdany-port3815428963/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726861678160557645" to /tmp/TestFunctionalparallelMountCmdany-port3815428963/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726861678160557645" to /tmp/TestFunctionalparallelMountCmdany-port3815428963/001/test-1726861678160557645
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.053542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 19:47:58.498226  719734 retry.go:31] will retry after 737.902199ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 19:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 19:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 19:47 test-1726861678160557645
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh cat /mount-9p/test-1726861678160557645
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-539812 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e683d6bb-90b9-44f6-84c5-164ec233fdc6] Pending
helpers_test.go:344: "busybox-mount" [e683d6bb-90b9-44f6-84c5-164ec233fdc6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e683d6bb-90b9-44f6-84c5-164ec233fdc6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e683d6bb-90b9-44f6-84c5-164ec233fdc6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005049735s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-539812 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdany-port3815428963/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.18s)
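
Note: the findmnt probe above fails once and is retried after a delay (the "retry.go:31] will retry after ..." lines). A sketch of that retry-with-backoff pattern; this is an illustrative stand-in, not minikube's retry.go itself:

// Retry-with-backoff sketch: re-run a check until it passes or time runs out.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func retry(timeout time.Duration, f func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 500 * time.Millisecond
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2 // back off between attempts
	}
}

func main() {
	err := retry(30*time.Second, func() error {
		// Same probe the test uses: is the 9p mount visible in the guest?
		return exec.Command("out/minikube-linux-arm64", "-p", "functional-539812",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
	if err != nil {
		panic(err)
	}
}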

TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 service list -o json
functional_test.go:1494: Took "575.847826ms" to run "out/minikube-linux-arm64 -p functional-539812 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31791
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.87s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31791
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
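
Note: the endpoint found above (http://192.168.49.2:31791) is the node's InternalIP plus the service's nodePort. A sketch that reconstructs it with two standard kubectl JSONPath queries; the context and service names are from this run:

// Rebuild the NodePort endpoint from node IP + service nodePort.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-539812"}, args...)...).Output()
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	ip := kubectl("get", "nodes", "-o", "jsonpath={.items[0].status.addresses[?(@.type==\"InternalIP\")].address}")
	port := kubectl("get", "svc", "hello-node", "-o", "jsonpath={.spec.ports[0].nodePort}")
	fmt.Printf("http://%s:%s\n", ip, port) // e.g. http://192.168.49.2:31791
}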

TestFunctional/parallel/MountCmd/specific-port (2.51s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdspecific-port4200096868/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (638.304855ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 19:48:08.975956  719734 retry.go:31] will retry after 525.408237ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdspecific-port4200096868/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-539812 ssh "sudo umount -f /mount-9p": exit status 1 (402.605559ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-539812 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdspecific-port4200096868/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.51s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2307555770/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2307555770/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2307555770/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T" /mount1: (1.166457372s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-539812 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2307555770/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2307555770/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-539812 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2307555770/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 version -o=json --components: (1.316695971s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-539812 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-539812
localhost/kicbase/echo-server:functional-539812
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-539812 image ls --format short --alsologtostderr:
I0920 19:48:21.664171  750066 out.go:345] Setting OutFile to fd 1 ...
I0920 19:48:21.664400  750066 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:21.664427  750066 out.go:358] Setting ErrFile to fd 2...
I0920 19:48:21.664447  750066 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:21.664765  750066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
I0920 19:48:21.665611  750066 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:21.665798  750066 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:21.666363  750066 cli_runner.go:164] Run: docker container inspect functional-539812 --format={{.State.Status}}
I0920 19:48:21.683918  750066 ssh_runner.go:195] Run: systemctl --version
I0920 19:48:21.683993  750066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-539812
I0920 19:48:21.711031  750066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/functional-539812/id_rsa Username:docker}
I0920 19:48:21.817630  750066 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-539812 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | b887aca7aed61 | 48.4MB |
| localhost/minikube-local-cache-test     | functional-539812  | 54404ec87a2ce | 3.33kB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/kicbase/echo-server           | functional-539812  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| docker.io/library/nginx                 | latest             | 195245f0c7927 | 197MB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-539812 image ls --format table --alsologtostderr:
I0920 19:48:22.362795  750221 out.go:345] Setting OutFile to fd 1 ...
I0920 19:48:22.367355  750221 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:22.367406  750221 out.go:358] Setting ErrFile to fd 2...
I0920 19:48:22.367438  750221 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:22.367913  750221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
I0920 19:48:22.370121  750221 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:22.370453  750221 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:22.371867  750221 cli_runner.go:164] Run: docker container inspect functional-539812 --format={{.State.Status}}
I0920 19:48:22.398048  750221 ssh_runner.go:195] Run: systemctl --version
I0920 19:48:22.398111  750221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-539812
I0920 19:48:22.420387  750221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/functional-539812/id_rsa Username:docker}
I0920 19:48:22.521916  750221 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-539812 image ls --format json --alsologtostderr:
[{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172029"},
{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},
{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},
{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},
{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},
{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48375489"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-539812"],"size":"4788229"},
{"id":"54404ec87a2ce485bcdcc62d34f54a31df3a35daec182fdab04dc93d02c0e175","repoDigests":["localhost/minikube-local-cache-test@sha256:ea4d07da1205cb2248576bfad8c1c38bbfd714caf9be5d6c37a6e73a123b091b"],"repoTags":["localhost/minikube-local-cache-test:functional-539812"],"size":"3330"},
{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},
{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},
{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-539812 image ls --format json --alsologtostderr:
I0920 19:48:22.019589  750130 out.go:345] Setting OutFile to fd 1 ...
I0920 19:48:22.019878  750130 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:22.019910  750130 out.go:358] Setting ErrFile to fd 2...
I0920 19:48:22.019946  750130 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:22.020415  750130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
I0920 19:48:22.023949  750130 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:22.024323  750130 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:22.024949  750130 cli_runner.go:164] Run: docker container inspect functional-539812 --format={{.State.Status}}
I0920 19:48:22.061004  750130 ssh_runner.go:195] Run: systemctl --version
I0920 19:48:22.061062  750130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-539812
I0920 19:48:22.084301  750130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/functional-539812/id_rsa Username:docker}
I0920 19:48:22.185988  750130 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
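
Note: the stderr blocks above show that `image ls` ultimately runs `sudo crictl images --output json` inside the node. A sketch parsing that JSON shape (fields as printed in the ImageListJson stdout above); treat the exact schema as an assumption if your crictl version differs:

// Parse the image list shape shown above from a local crictl invocation.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

type criImages struct {
	Images []criImage `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list criImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}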

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-539812 image ls --format yaml --alsologtostderr:
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "48375489"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171
repoTags:
- docker.io/library/nginx:latest
size: "197172029"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-539812
size: "4788229"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 54404ec87a2ce485bcdcc62d34f54a31df3a35daec182fdab04dc93d02c0e175
repoDigests:
- localhost/minikube-local-cache-test@sha256:ea4d07da1205cb2248576bfad8c1c38bbfd714caf9be5d6c37a6e73a123b091b
repoTags:
- localhost/minikube-local-cache-test:functional-539812
size: "3330"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-539812 image ls --format yaml --alsologtostderr:
I0920 19:48:21.667551  750067 out.go:345] Setting OutFile to fd 1 ...
I0920 19:48:21.667729  750067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:21.667741  750067 out.go:358] Setting ErrFile to fd 2...
I0920 19:48:21.667747  750067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:21.667992  750067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
I0920 19:48:21.668663  750067 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:21.668853  750067 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:21.669369  750067 cli_runner.go:164] Run: docker container inspect functional-539812 --format={{.State.Status}}
I0920 19:48:21.692151  750067 ssh_runner.go:195] Run: systemctl --version
I0920 19:48:21.692220  750067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-539812
I0920 19:48:21.719397  750067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/functional-539812/id_rsa Username:docker}
I0920 19:48:21.841303  750067 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-539812 ssh pgrep buildkitd: exit status 1 (336.670941ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image build -t localhost/my-image:functional-539812 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 image build -t localhost/my-image:functional-539812 testdata/build --alsologtostderr: (3.317342449s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-539812 image build -t localhost/my-image:functional-539812 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> aa36c7bf68b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-539812
--> 482b5ab874e
Successfully tagged localhost/my-image:functional-539812
482b5ab874ee541a35bc1aba6c4485e96afe6a1b9ab52c4d69cff2e9268ba050
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-539812 image build -t localhost/my-image:functional-539812 testdata/build --alsologtostderr:
I0920 19:48:22.297992  750216 out.go:345] Setting OutFile to fd 1 ...
I0920 19:48:22.298672  750216 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:22.298698  750216 out.go:358] Setting ErrFile to fd 2...
I0920 19:48:22.298705  750216 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:48:22.298991  750216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
I0920 19:48:22.299688  750216 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:22.300288  750216 config.go:182] Loaded profile config "functional-539812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:48:22.300880  750216 cli_runner.go:164] Run: docker container inspect functional-539812 --format={{.State.Status}}
I0920 19:48:22.324276  750216 ssh_runner.go:195] Run: systemctl --version
I0920 19:48:22.324340  750216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-539812
I0920 19:48:22.357978  750216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/functional-539812/id_rsa Username:docker}
I0920 19:48:22.463252  750216 build_images.go:161] Building image from path: /tmp/build.3675998720.tar
I0920 19:48:22.463331  750216 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 19:48:22.475645  750216 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3675998720.tar
I0920 19:48:22.479910  750216 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3675998720.tar: stat -c "%s %y" /var/lib/minikube/build/build.3675998720.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3675998720.tar': No such file or directory
I0920 19:48:22.479951  750216 ssh_runner.go:362] scp /tmp/build.3675998720.tar --> /var/lib/minikube/build/build.3675998720.tar (3072 bytes)
I0920 19:48:22.507861  750216 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3675998720
I0920 19:48:22.517897  750216 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3675998720 -xf /var/lib/minikube/build/build.3675998720.tar
I0920 19:48:22.529470  750216 crio.go:315] Building image: /var/lib/minikube/build/build.3675998720
I0920 19:48:22.529564  750216 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-539812 /var/lib/minikube/build/build.3675998720 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0920 19:48:25.503844  750216 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-539812 /var/lib/minikube/build/build.3675998720 --cgroup-manager=cgroupfs: (2.974254137s)
I0920 19:48:25.503969  750216 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3675998720
I0920 19:48:25.516269  750216 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3675998720.tar
I0920 19:48:25.530337  750216 build_images.go:217] Built localhost/my-image:functional-539812 from /tmp/build.3675998720.tar
I0920 19:48:25.530370  750216 build_images.go:133] succeeded building to: functional-539812
I0920 19:48:25.530376  750216 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)
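
Note: the build log above ships the context as a tar (/tmp/build.*.tar) into /var/lib/minikube/build before podman builds it in the node. A minimal sketch of that packaging step with archive/tar; paths here are placeholders, not the harness's temp names:

// Pack a build-context directory into a tar file, one entry per regular file.
package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

func tarDir(src, dst string) error {
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()

	return filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		hdr.Name = rel // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		in, err := os.Open(path)
		if err != nil {
			return err
		}
		defer in.Close()
		_, err = io.Copy(tw, in)
		return err
	})
}

func main() {
	if err := tarDir("testdata/build", "/tmp/build.example.tar"); err != nil {
		panic(err)
	}
}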

TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-539812
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image load --daemon kicbase/echo-server:functional-539812 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-539812 image load --daemon kicbase/echo-server:functional-539812 --alsologtostderr: (1.205104995s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image load --daemon kicbase/echo-server:functional-539812 --alsologtostderr
2024/09/20 19:48:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-539812
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image load --daemon kicbase/echo-server:functional-539812 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image save kicbase/echo-server:functional-539812 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image rm kicbase/echo-server:functional-539812 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-539812
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-539812 image save --daemon kicbase/echo-server:functional-539812 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-539812
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)
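
Note: the roundtrip above is remove the host copy, save the cluster's image back into the Docker daemon, then inspect it; the restored name carries a localhost/ prefix, as the inspect command shows. A sketch with os/exec, using the binary path and profile name from this run:

// Save-to-daemon roundtrip sketch, mirroring the three commands above.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		panic(string(out))
	}
}

func main() {
	run("docker", "rmi", "kicbase/echo-server:functional-539812")
	run("out/minikube-linux-arm64", "-p", "functional-539812",
		"image", "save", "--daemon", "kicbase/echo-server:functional-539812")
	run("docker", "image", "inspect", "localhost/kicbase/echo-server:functional-539812")
	fmt.Println("roundtrip OK")
}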

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-539812
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-539812
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-539812
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (177.71s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-688277 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 19:49:26.006946  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:26.013536  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:26.024953  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:26.046342  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:26.087714  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:26.169154  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:26.330666  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:26.652574  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:27.294279  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:28.575701  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:31.137899  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:36.259647  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:49:46.501397  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:50:06.983395  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:50:47.945519  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-688277 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m56.836844011s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (177.71s)
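Two things are worth noting in this block. The --ha flag is what makes StartCluster a multi-control-plane run: minikube brings up three control-plane nodes behind the load-balancer VIP that the later status checks probe (https://192.168.49.254:8443). And the repeated cert_rotation errors appear to be harmless noise: a client-go certificate watcher is still pointing at the client.crt of the already-deleted addons-244316 profile, and the test passes regardless. A minimal sketch of the start-and-verify step, using the same driver and runtime as this job:

    # Three control-plane nodes, wait for all components, then confirm health.
    out/minikube-linux-arm64 start -p ha-688277 --wait=true --memory=2200 --ha \
        -v=7 --alsologtostderr --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr   # exit 0 only when every node is up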

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-688277 -- rollout status deployment/busybox: (6.804368215s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-b4p5n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-dvw25 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-rx7lk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-b4p5n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-dvw25 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-rx7lk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-b4p5n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-dvw25 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-rx7lk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.83s)
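DeployApp applies a three-replica busybox deployment and then resolves an external name, the short service name, and the fully qualified service name from every replica. A condensed sketch of the same verification, assuming the pods from ha-pod-dns-test.yaml are Running; like the test, it enumerates all pod names rather than using a label selector, which assumes only the busybox pods exist in the default namespace, as in this run:

    kubectl --context ha-688277 rollout status deployment/busybox
    for pod in $(kubectl --context ha-688277 get pods -o jsonpath='{.items[*].metadata.name}'); do
        kubectl --context ha-688277 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done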

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-b4p5n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-b4p5n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-dvw25 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-dvw25 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-rx7lk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-688277 -- exec busybox-7dff88458-rx7lk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.80s)
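The pipeline in these exec calls is the interesting part: busybox's nslookup prints the answer record for host.minikube.internal on its fifth output line in this image, so awk 'NR==5' | cut -d' ' -f3 peels out the bare address, and the follow-up ping proves the pod can reach the host-side gateway. Run inside any one of the pods, the same two steps look like this (192.168.49.1 is the docker network gateway in this run):

    # Resolve the host alias, keep only the answer IP, then ping it once.
    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"    # 192.168.49.1 here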

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (36.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-688277 -v=7 --alsologtostderr
E0920 19:52:09.867817  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-688277 -v=7 --alsologtostderr: (35.848360514s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr: (1.093462213s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.94s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-688277 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.108512521s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-688277 status --output json -v=7 --alsologtostderr: (1.028312004s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp testdata/cp-test.txt ha-688277:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2586449424/001/cp-test_ha-688277.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277:/home/docker/cp-test.txt ha-688277-m02:/home/docker/cp-test_ha-688277_ha-688277-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m02 "sudo cat /home/docker/cp-test_ha-688277_ha-688277-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277:/home/docker/cp-test.txt ha-688277-m03:/home/docker/cp-test_ha-688277_ha-688277-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m03 "sudo cat /home/docker/cp-test_ha-688277_ha-688277-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277:/home/docker/cp-test.txt ha-688277-m04:/home/docker/cp-test_ha-688277_ha-688277-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m04 "sudo cat /home/docker/cp-test_ha-688277_ha-688277-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp testdata/cp-test.txt ha-688277-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2586449424/001/cp-test_ha-688277-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m02:/home/docker/cp-test.txt ha-688277:/home/docker/cp-test_ha-688277-m02_ha-688277.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277 "sudo cat /home/docker/cp-test_ha-688277-m02_ha-688277.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m02:/home/docker/cp-test.txt ha-688277-m03:/home/docker/cp-test_ha-688277-m02_ha-688277-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m03 "sudo cat /home/docker/cp-test_ha-688277-m02_ha-688277-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m02:/home/docker/cp-test.txt ha-688277-m04:/home/docker/cp-test_ha-688277-m02_ha-688277-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m04 "sudo cat /home/docker/cp-test_ha-688277-m02_ha-688277-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp testdata/cp-test.txt ha-688277-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2586449424/001/cp-test_ha-688277-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m03:/home/docker/cp-test.txt ha-688277:/home/docker/cp-test_ha-688277-m03_ha-688277.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277 "sudo cat /home/docker/cp-test_ha-688277-m03_ha-688277.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m03:/home/docker/cp-test.txt ha-688277-m02:/home/docker/cp-test_ha-688277-m03_ha-688277-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m02 "sudo cat /home/docker/cp-test_ha-688277-m03_ha-688277-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m03:/home/docker/cp-test.txt ha-688277-m04:/home/docker/cp-test_ha-688277-m03_ha-688277-m04.txt
E0920 19:52:30.507663  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:52:30.515130  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:52:30.527394  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:52:30.548932  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:52:30.590314  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:52:30.671755  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m03 "sudo cat /home/docker/cp-test.txt"
E0920 19:52:30.833917  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m04 "sudo cat /home/docker/cp-test_ha-688277-m03_ha-688277-m04.txt"
E0920 19:52:31.156080  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp testdata/cp-test.txt ha-688277-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m04 "sudo cat /home/docker/cp-test.txt"
E0920 19:52:31.798327  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2586449424/001/cp-test_ha-688277-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m04:/home/docker/cp-test.txt ha-688277:/home/docker/cp-test_ha-688277-m04_ha-688277.txt
E0920 19:52:33.080527  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277 "sudo cat /home/docker/cp-test_ha-688277-m04_ha-688277.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m04:/home/docker/cp-test.txt ha-688277-m02:/home/docker/cp-test_ha-688277-m04_ha-688277-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m02 "sudo cat /home/docker/cp-test_ha-688277-m04_ha-688277-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m04:/home/docker/cp-test.txt ha-688277-m03:/home/docker/cp-test_ha-688277-m04_ha-688277-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m04 "sudo cat /home/docker/cp-test.txt"
E0920 19:52:35.647200  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m03 "sudo cat /home/docker/cp-test_ha-688277-m04_ha-688277-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.00s)
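CopyFile is an all-pairs matrix over the four nodes: copy the test file from the host into each node, back out to the host, and across to every other node, reading the file over ssh after each hop. One hop of that matrix, lifted straight from the commands above:

    # host -> m02, then m02 -> m03, verifying each landing with ssh + cat.
    out/minikube-linux-arm64 -p ha-688277 cp testdata/cp-test.txt ha-688277-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m02 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p ha-688277 cp ha-688277-m02:/home/docker/cp-test.txt \
        ha-688277-m03:/home/docker/cp-test_ha-688277-m02_ha-688277-m03.txt
    out/minikube-linux-arm64 -p ha-688277 ssh -n ha-688277-m03 \
        "sudo cat /home/docker/cp-test_ha-688277-m02_ha-688277-m03.txt"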

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 node stop m02 -v=7 --alsologtostderr
E0920 19:52:40.769330  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-688277 node stop m02 -v=7 --alsologtostderr: (12.04026069s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr: exit status 7 (768.586175ms)

                                                
                                                
-- stdout --
	ha-688277
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-688277-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-688277-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-688277-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:52:48.296268  765942 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:52:48.296387  765942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:52:48.296392  765942 out.go:358] Setting ErrFile to fd 2...
	I0920 19:52:48.296398  765942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:52:48.296650  765942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 19:52:48.296899  765942 out.go:352] Setting JSON to false
	I0920 19:52:48.296941  765942 mustload.go:65] Loading cluster: ha-688277
	I0920 19:52:48.297045  765942 notify.go:220] Checking for updates...
	I0920 19:52:48.297406  765942 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:52:48.297420  765942 status.go:174] checking status of ha-688277 ...
	I0920 19:52:48.297959  765942 cli_runner.go:164] Run: docker container inspect ha-688277 --format={{.State.Status}}
	I0920 19:52:48.320549  765942 status.go:364] ha-688277 host status = "Running" (err=<nil>)
	I0920 19:52:48.320573  765942 host.go:66] Checking if "ha-688277" exists ...
	I0920 19:52:48.320920  765942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277
	I0920 19:52:48.347561  765942 host.go:66] Checking if "ha-688277" exists ...
	I0920 19:52:48.347889  765942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:52:48.347944  765942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277
	I0920 19:52:48.371585  765942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277/id_rsa Username:docker}
	I0920 19:52:48.470299  765942 ssh_runner.go:195] Run: systemctl --version
	I0920 19:52:48.475489  765942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:52:48.489288  765942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:52:48.545225  765942 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-20 19:52:48.534429076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:52:48.545836  765942 kubeconfig.go:125] found "ha-688277" server: "https://192.168.49.254:8443"
	I0920 19:52:48.545873  765942 api_server.go:166] Checking apiserver status ...
	I0920 19:52:48.545917  765942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:52:48.558710  765942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	I0920 19:52:48.569236  765942 api_server.go:182] apiserver freezer: "7:freezer:/docker/5961ba43cb33c469ae1bae1ac1a7cd8f88f8553016ef942103ee6b9be5b14c7c/crio/crio-4405388d73c8d99445a7a898854b0b79f92706b5fb484bcbb7f0f88bf5683c74"
	I0920 19:52:48.569311  765942 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5961ba43cb33c469ae1bae1ac1a7cd8f88f8553016ef942103ee6b9be5b14c7c/crio/crio-4405388d73c8d99445a7a898854b0b79f92706b5fb484bcbb7f0f88bf5683c74/freezer.state
	I0920 19:52:48.578639  765942 api_server.go:204] freezer state: "THAWED"
	I0920 19:52:48.578668  765942 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 19:52:48.587822  765942 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 19:52:48.587851  765942 status.go:456] ha-688277 apiserver status = Running (err=<nil>)
	I0920 19:52:48.587863  765942 status.go:176] ha-688277 status: &{Name:ha-688277 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:52:48.587881  765942 status.go:174] checking status of ha-688277-m02 ...
	I0920 19:52:48.588214  765942 cli_runner.go:164] Run: docker container inspect ha-688277-m02 --format={{.State.Status}}
	I0920 19:52:48.611626  765942 status.go:364] ha-688277-m02 host status = "Stopped" (err=<nil>)
	I0920 19:52:48.611647  765942 status.go:377] host is not running, skipping remaining checks
	I0920 19:52:48.611662  765942 status.go:176] ha-688277-m02 status: &{Name:ha-688277-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:52:48.611683  765942 status.go:174] checking status of ha-688277-m03 ...
	I0920 19:52:48.613025  765942 cli_runner.go:164] Run: docker container inspect ha-688277-m03 --format={{.State.Status}}
	I0920 19:52:48.641118  765942 status.go:364] ha-688277-m03 host status = "Running" (err=<nil>)
	I0920 19:52:48.641177  765942 host.go:66] Checking if "ha-688277-m03" exists ...
	I0920 19:52:48.641756  765942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277-m03
	I0920 19:52:48.661580  765942 host.go:66] Checking if "ha-688277-m03" exists ...
	I0920 19:52:48.661914  765942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:52:48.661961  765942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m03
	I0920 19:52:48.680679  765942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m03/id_rsa Username:docker}
	I0920 19:52:48.782869  765942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:52:48.795765  765942 kubeconfig.go:125] found "ha-688277" server: "https://192.168.49.254:8443"
	I0920 19:52:48.795830  765942 api_server.go:166] Checking apiserver status ...
	I0920 19:52:48.795879  765942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:52:48.807090  765942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1330/cgroup
	I0920 19:52:48.817849  765942 api_server.go:182] apiserver freezer: "7:freezer:/docker/dc0d6a647f44f85326c4ae3a7770ea2a2026c901617dba4d4c64e0a2757d56eb/crio/crio-a19c5c9ea87b3bcd9c528bb534d40fc0b6135ca619cecf2f1152dece1d329bdb"
	I0920 19:52:48.818033  765942 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dc0d6a647f44f85326c4ae3a7770ea2a2026c901617dba4d4c64e0a2757d56eb/crio/crio-a19c5c9ea87b3bcd9c528bb534d40fc0b6135ca619cecf2f1152dece1d329bdb/freezer.state
	I0920 19:52:48.828770  765942 api_server.go:204] freezer state: "THAWED"
	I0920 19:52:48.828811  765942 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 19:52:48.836619  765942 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 19:52:48.836668  765942 status.go:456] ha-688277-m03 apiserver status = Running (err=<nil>)
	I0920 19:52:48.836679  765942 status.go:176] ha-688277-m03 status: &{Name:ha-688277-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:52:48.836810  765942 status.go:174] checking status of ha-688277-m04 ...
	I0920 19:52:48.837142  765942 cli_runner.go:164] Run: docker container inspect ha-688277-m04 --format={{.State.Status}}
	I0920 19:52:48.853999  765942 status.go:364] ha-688277-m04 host status = "Running" (err=<nil>)
	I0920 19:52:48.854026  765942 host.go:66] Checking if "ha-688277-m04" exists ...
	I0920 19:52:48.854336  765942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-688277-m04
	I0920 19:52:48.871342  765942 host.go:66] Checking if "ha-688277-m04" exists ...
	I0920 19:52:48.871751  765942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:52:48.871854  765942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-688277-m04
	I0920 19:52:48.889922  765942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/ha-688277-m04/id_rsa Username:docker}
	I0920 19:52:48.994029  765942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:52:49.010231  765942 status.go:176] ha-688277-m04 status: &{Name:ha-688277-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.81s)
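The non-zero exit is the point of this test: status returns exit code 7 when any node is down, so a degraded cluster is distinguishable from a healthy one without parsing output. The stderr trace also shows how the apiserver check works on a running node: find the kube-apiserver pid, read its freezer cgroup, and only probe /healthz once the state is THAWED. A sketch of both checks, assuming ssh access to a control-plane node; the curl probe is an assumption layered on top of the logged Go-side healthz call:

    out/minikube-linux-arm64 -p ha-688277 status; echo "exit: $?"    # exit: 7 while m02 is stopped

    # On a control-plane node: is the apiserver frozen, and does it answer?
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup                  # locate the freezer cgroup
    curl -ks https://192.168.49.254:8443/healthz                     # expect: ok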

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 node start m02 -v=7 --alsologtostderr
E0920 19:52:51.011381  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-688277 node start m02 -v=7 --alsologtostderr: (20.51338234s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr
E0920 19:53:11.494182  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr: (1.396343304s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.05s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.339880709s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.34s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-688277 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-688277 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-688277 -v=7 --alsologtostderr: (37.201347637s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-688277 --wait=true -v=7 --alsologtostderr
E0920 19:53:52.455546  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:54:26.004970  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:54:53.709309  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:55:14.377133  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-688277 --wait=true -v=7 --alsologtostderr: (2m46.857875715s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-688277
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.22s)
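RestartClusterKeepsNodes asserts that a full stop/start cycle reconstructs the whole topology, not just the primary: the node list taken before the stop must match the one taken after the restart. Condensed:

    out/minikube-linux-arm64 node list -p ha-688277                    # snapshot the node set
    out/minikube-linux-arm64 stop -p ha-688277 -v=7 --alsologtostderr
    out/minikube-linux-arm64 start -p ha-688277 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-arm64 node list -p ha-688277                    # must match the snapshot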

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (13.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-688277 node delete m03 -v=7 --alsologtostderr: (12.163264688s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.20s)
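After deleting m03, the closing go-template counts Ready conditions across the remaining nodes, which is a compact way of asserting "everything left is Ready" without extra tooling. Reproduced on its own:

    out/minikube-linux-arm64 -p ha-688277 node delete m03 -v=7 --alsologtostderr
    # One "True" line per surviving node; ha-688277-m03 must no longer appear.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'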

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-688277 stop -v=7 --alsologtostderr: (35.977473138s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr: exit status 7 (118.354415ms)

                                                
                                                
-- stdout --
	ha-688277
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-688277-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-688277-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:57:27.483318  780606 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:57:27.483457  780606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:57:27.483470  780606 out.go:358] Setting ErrFile to fd 2...
	I0920 19:57:27.483476  780606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:57:27.483730  780606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 19:57:27.483954  780606 out.go:352] Setting JSON to false
	I0920 19:57:27.484001  780606 mustload.go:65] Loading cluster: ha-688277
	I0920 19:57:27.484083  780606 notify.go:220] Checking for updates...
	I0920 19:57:27.484471  780606 config.go:182] Loaded profile config "ha-688277": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:57:27.484487  780606 status.go:174] checking status of ha-688277 ...
	I0920 19:57:27.485392  780606 cli_runner.go:164] Run: docker container inspect ha-688277 --format={{.State.Status}}
	I0920 19:57:27.503554  780606 status.go:364] ha-688277 host status = "Stopped" (err=<nil>)
	I0920 19:57:27.503575  780606 status.go:377] host is not running, skipping remaining checks
	I0920 19:57:27.503582  780606 status.go:176] ha-688277 status: &{Name:ha-688277 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:57:27.503605  780606 status.go:174] checking status of ha-688277-m02 ...
	I0920 19:57:27.503933  780606 cli_runner.go:164] Run: docker container inspect ha-688277-m02 --format={{.State.Status}}
	I0920 19:57:27.530296  780606 status.go:364] ha-688277-m02 host status = "Stopped" (err=<nil>)
	I0920 19:57:27.530318  780606 status.go:377] host is not running, skipping remaining checks
	I0920 19:57:27.530332  780606 status.go:176] ha-688277-m02 status: &{Name:ha-688277-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:57:27.530352  780606 status.go:174] checking status of ha-688277-m04 ...
	I0920 19:57:27.530656  780606 cli_runner.go:164] Run: docker container inspect ha-688277-m04 --format={{.State.Status}}
	I0920 19:57:27.552793  780606 status.go:364] ha-688277-m04 host status = "Stopped" (err=<nil>)
	I0920 19:57:27.552816  780606 status.go:377] host is not running, skipping remaining checks
	I0920 19:57:27.552824  780606 status.go:176] ha-688277-m04 status: &{Name:ha-688277-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.006341285s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.01s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (70.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-688277 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-688277 --control-plane -v=7 --alsologtostderr: (1m9.926417682s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr: (1.032121131s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.96s)
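AddSecondaryNode is the control-plane counterpart of AddWorkerNode earlier in this run: the --control-plane flag makes the new node join as another control-plane member rather than a worker, restoring the three-member control plane after the m03 deletion:

    out/minikube-linux-arm64 node add -p ha-688277 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-688277 status -v=7 --alsologtostderr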

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.027056361s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
TestJSONOutput/start/Command (47.94s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-164700 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-164700 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (47.931316962s)
--- PASS: TestJSONOutput/start/Command (47.94s)
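With --output=json, every progress step is emitted as a one-line CloudEvents envelope (the same shape visible in the TestErrorJSONOutput stdout further down), and the Audit/DistinctCurrentSteps/IncreasingCurrentSteps subtests below assert that the data.currentstep values are present, distinct, and monotonically increasing. A sketch of eyeballing the same stream, assuming jq is on the PATH:

    out/minikube-linux-arm64 start -p json-output-164700 --output=json --user=testUser \
        --memory=2200 --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + ": " + .data.message'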

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-164700 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (1.01s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-164700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 unpause -p json-output-164700 --output=json --user=testUser: (1.010291328s)
--- PASS: TestJSONOutput/unpause/Command (1.01s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.89s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-164700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-164700 --output=json --user=testUser: (5.891011224s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-007418 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-007418 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.960663ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c799f0ea-b95d-4301-8d5d-ca3ce659d5a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-007418] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f0e2b03-e390-4aa5-b680-234fb8bac31b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"b999c6c9-daf7-4c54-bfdb-303c2452b399","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"be1dc448-a113-4db4-8f5e-50549696781d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig"}}
	{"specversion":"1.0","id":"f5e6c324-5d78-4232-bc80-7623b364316c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube"}}
	{"specversion":"1.0","id":"85b44ee0-1b5f-459d-bf5d-9f5cd6fd47ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"947533e7-af95-4a70-bfbf-89b1ecbf86d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7d741272-74c1-4f98-bdb2-4a0a7f2b06fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-007418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-007418
--- PASS: TestErrorJSONOutput (0.23s)
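
Aside: the JSON lines above are CloudEvents-style records (specversion, id, source, type, data). A minimal Go sketch of consuming that stream, assuming only the fields visible in this log, might look like:

	// cloudevent.go: a sketch of decoding the JSON lines emitted by
	// `minikube start --output=json`, using only the fields visible
	// in the log above (specversion, id, source, type, data).
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip any non-JSON lines
			}
			// io.k8s.sigs.minikube.error events carry exitcode/message
			// keys, as in the DRV_UNSUPPORTED_OS event above.
			fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
		}
	}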

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.25s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-718226 --network=
E0920 20:02:30.507870  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-718226 --network=: (37.146156278s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-718226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-718226
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-718226: (2.07880911s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.25s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.35s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-755494 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-755494 --network=bridge: (33.284511898s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-755494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-755494
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-755494: (2.042795494s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.35s)

                                                
                                    
TestKicExistingNetwork (35.5s)

=== RUN   TestKicExistingNetwork
I0920 20:03:25.832104  719734 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 20:03:25.846051  719734 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 20:03:25.846135  719734 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 20:03:25.846152  719734 cli_runner.go:164] Run: docker network inspect existing-network
W0920 20:03:25.861748  719734 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 20:03:25.861783  719734 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0920 20:03:25.861802  719734 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0920 20:03:25.861929  719734 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 20:03:25.882110  719734 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e800fa7f1d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0f:18:87:3d} reservation:<nil>}
I0920 20:03:25.882629  719734 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ccb220}
I0920 20:03:25.882661  719734 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 20:03:25.882720  719734 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 20:03:25.954670  719734 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-690066 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-690066 --network=existing-network: (33.168669583s)
helpers_test.go:175: Cleaning up "existing-network-690066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-690066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-690066: (2.173926133s)
I0920 20:04:01.312261  719734 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.50s)
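
Aside: network_create.go:124 above logs the exact invocation minikube used to pre-create the network this test consumes. A hedged Go sketch of that setup step follows; the subnet is whatever free private /24 minikube picked at runtime (192.168.58.0/24 in this run), and the flags simply mirror the logged command:

	// precreate_network.go: a sketch of pre-creating the Docker network
	// TestKicExistingNetwork expects, mirroring the `docker network create`
	// command logged at network_create.go:124 above.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24", // free /24 chosen at runtime in this run
			"--gateway=192.168.58.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("network create failed: %v\n%s", err, out)
		}
	}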

                                                
                                    
TestKicCustomSubnet (33.79s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-436051 --subnet=192.168.60.0/24
E0920 20:04:26.004860  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-436051 --subnet=192.168.60.0/24: (31.536984519s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-436051 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-436051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-436051
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-436051: (2.21681444s)
--- PASS: TestKicCustomSubnet (33.79s)

                                                
                                    
TestKicStaticIP (37.15s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-271698 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-271698 --static-ip=192.168.200.200: (34.786672866s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-271698 ip
helpers_test.go:175: Cleaning up "static-ip-271698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-271698
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-271698: (2.156608263s)
--- PASS: TestKicStaticIP (37.15s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (72.5s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-202775 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-202775 --driver=docker  --container-runtime=crio: (32.901231143s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-205212 --driver=docker  --container-runtime=crio
E0920 20:05:49.072932  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-205212 --driver=docker  --container-runtime=crio: (34.143608576s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-202775
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-205212
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-205212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-205212
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-205212: (1.992477762s)
helpers_test.go:175: Cleaning up "first-202775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-202775
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-202775: (1.988562211s)
--- PASS: TestMinikubeProfile (72.50s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-982557 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-982557 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.970052066s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.97s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-982557 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-984403 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-984403 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.269036915s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.27s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-984403 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-982557 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-982557 --alsologtostderr -v=5: (1.736949934s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-984403 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-984403
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-984403: (1.215301151s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-984403
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-984403: (7.216801823s)
--- PASS: TestMountStart/serial/RestartStopped (8.22s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-984403 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (105.81s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-500038 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 20:07:30.507441  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-500038 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m45.271331778s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.81s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.82s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-500038 -- rollout status deployment/busybox: (4.788463643s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qfzvz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qjtnn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qfzvz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qjtnn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qfzvz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qjtnn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.82s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qfzvz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qfzvz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qjtnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-500038 -- exec busybox-7dff88458-qjtnn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)

                                                
                                    
TestMultiNode/serial/AddNode (57.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-500038 -v 3 --alsologtostderr
E0920 20:08:53.580548  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:09:26.003564  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-500038 -v 3 --alsologtostderr: (56.40038939s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-500038 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.36s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp testdata/cp-test.txt multinode-500038:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp multinode-500038:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile866302211/001/cp-test_multinode-500038.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp multinode-500038:/home/docker/cp-test.txt multinode-500038-m02:/home/docker/cp-test_multinode-500038_multinode-500038-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m02 "sudo cat /home/docker/cp-test_multinode-500038_multinode-500038-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp multinode-500038:/home/docker/cp-test.txt multinode-500038-m03:/home/docker/cp-test_multinode-500038_multinode-500038-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m03 "sudo cat /home/docker/cp-test_multinode-500038_multinode-500038-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp testdata/cp-test.txt multinode-500038-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp multinode-500038-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile866302211/001/cp-test_multinode-500038-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp multinode-500038-m02:/home/docker/cp-test.txt multinode-500038:/home/docker/cp-test_multinode-500038-m02_multinode-500038.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038 "sudo cat /home/docker/cp-test_multinode-500038-m02_multinode-500038.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp multinode-500038-m02:/home/docker/cp-test.txt multinode-500038-m03:/home/docker/cp-test_multinode-500038-m02_multinode-500038-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m03 "sudo cat /home/docker/cp-test_multinode-500038-m02_multinode-500038-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp testdata/cp-test.txt multinode-500038-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp multinode-500038-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile866302211/001/cp-test_multinode-500038-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp multinode-500038-m03:/home/docker/cp-test.txt multinode-500038:/home/docker/cp-test_multinode-500038-m03_multinode-500038.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038 "sudo cat /home/docker/cp-test_multinode-500038-m03_multinode-500038.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 cp multinode-500038-m03:/home/docker/cp-test.txt multinode-500038-m02:/home/docker/cp-test_multinode-500038-m03_multinode-500038-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 ssh -n multinode-500038-m02 "sudo cat /home/docker/cp-test_multinode-500038-m03_multinode-500038-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.36s)

                                                
                                    
TestMultiNode/serial/StopNode (2.71s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-500038 node stop m03: (1.397372784s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-500038 status: exit status 7 (742.916869ms)

                                                
                                                
-- stdout --
	multinode-500038
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-500038-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-500038-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-500038 status --alsologtostderr: exit status 7 (573.031113ms)

                                                
                                                
-- stdout --
	multinode-500038
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-500038-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-500038-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 20:10:01.233614  835137 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:10:01.235093  835137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:10:01.235107  835137 out.go:358] Setting ErrFile to fd 2...
	I0920 20:10:01.235114  835137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:10:01.235443  835137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 20:10:01.235682  835137 out.go:352] Setting JSON to false
	I0920 20:10:01.235738  835137 mustload.go:65] Loading cluster: multinode-500038
	I0920 20:10:01.235848  835137 notify.go:220] Checking for updates...
	I0920 20:10:01.236280  835137 config.go:182] Loaded profile config "multinode-500038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:10:01.236306  835137 status.go:174] checking status of multinode-500038 ...
	I0920 20:10:01.237034  835137 cli_runner.go:164] Run: docker container inspect multinode-500038 --format={{.State.Status}}
	I0920 20:10:01.259165  835137 status.go:364] multinode-500038 host status = "Running" (err=<nil>)
	I0920 20:10:01.259197  835137 host.go:66] Checking if "multinode-500038" exists ...
	I0920 20:10:01.259525  835137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-500038
	I0920 20:10:01.297384  835137 host.go:66] Checking if "multinode-500038" exists ...
	I0920 20:10:01.297868  835137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 20:10:01.298047  835137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-500038
	I0920 20:10:01.320534  835137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/multinode-500038/id_rsa Username:docker}
	I0920 20:10:01.422861  835137 ssh_runner.go:195] Run: systemctl --version
	I0920 20:10:01.427869  835137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:10:01.441075  835137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:10:01.497156  835137 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-20 20:10:01.48625529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 20:10:01.497855  835137 kubeconfig.go:125] found "multinode-500038" server: "https://192.168.67.2:8443"
	I0920 20:10:01.497903  835137 api_server.go:166] Checking apiserver status ...
	I0920 20:10:01.498016  835137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:10:01.510657  835137 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	I0920 20:10:01.521741  835137 api_server.go:182] apiserver freezer: "7:freezer:/docker/885bd1ffe561e14af261b54ebc064558637ec6dca1b5863d476553edc46e0881/crio/crio-de032b29e0727c7edca1e98923cf9748db1f6c33ad4830f07074e7cd2a4ff947"
	I0920 20:10:01.521826  835137 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/885bd1ffe561e14af261b54ebc064558637ec6dca1b5863d476553edc46e0881/crio/crio-de032b29e0727c7edca1e98923cf9748db1f6c33ad4830f07074e7cd2a4ff947/freezer.state
	I0920 20:10:01.532255  835137 api_server.go:204] freezer state: "THAWED"
	I0920 20:10:01.532332  835137 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 20:10:01.541996  835137 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 20:10:01.542031  835137 status.go:456] multinode-500038 apiserver status = Running (err=<nil>)
	I0920 20:10:01.542043  835137 status.go:176] multinode-500038 status: &{Name:multinode-500038 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 20:10:01.542062  835137 status.go:174] checking status of multinode-500038-m02 ...
	I0920 20:10:01.542407  835137 cli_runner.go:164] Run: docker container inspect multinode-500038-m02 --format={{.State.Status}}
	I0920 20:10:01.567931  835137 status.go:364] multinode-500038-m02 host status = "Running" (err=<nil>)
	I0920 20:10:01.567963  835137 host.go:66] Checking if "multinode-500038-m02" exists ...
	I0920 20:10:01.568280  835137 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-500038-m02
	I0920 20:10:01.586523  835137 host.go:66] Checking if "multinode-500038-m02" exists ...
	I0920 20:10:01.586862  835137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 20:10:01.586913  835137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-500038-m02
	I0920 20:10:01.605588  835137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19678-712952/.minikube/machines/multinode-500038-m02/id_rsa Username:docker}
	I0920 20:10:01.706792  835137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:10:01.719768  835137 status.go:176] multinode-500038-m02 status: &{Name:multinode-500038-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 20:10:01.719816  835137 status.go:174] checking status of multinode-500038-m03 ...
	I0920 20:10:01.720160  835137 cli_runner.go:164] Run: docker container inspect multinode-500038-m03 --format={{.State.Status}}
	I0920 20:10:01.739313  835137 status.go:364] multinode-500038-m03 host status = "Stopped" (err=<nil>)
	I0920 20:10:01.739338  835137 status.go:377] host is not running, skipping remaining checks
	I0920 20:10:01.739345  835137 status.go:176] multinode-500038-m03 status: &{Name:multinode-500038-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.71s)
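
Aside: the stderr above walks minikube's apiserver status check step by step: pgrep the kube-apiserver process, read its freezer cgroup state, then probe /healthz (api_server.go:253), expecting HTTP 200 "ok". A minimal sketch of that final probe, assuming the control-plane IP from this run and skipping TLS verification for brevity (minikube itself uses the cluster CA):

	// healthz.go: a sketch of the last step of the status check shown
	// above, probing the apiserver health endpoint directly.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// assumption for brevity; do not do this against real clusters
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}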

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.24s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-500038 node start m03 -v=7 --alsologtostderr: (9.422460937s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.24s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (106.96s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-500038
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-500038
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-500038: (24.903406692s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-500038 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-500038 --wait=true -v=8 --alsologtostderr: (1m21.923951196s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-500038
--- PASS: TestMultiNode/serial/RestartKeepsNodes (106.96s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.07s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-500038 node delete m03: (5.34236815s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.07s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.95s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-500038 stop: (23.744720051s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-500038 status: exit status 7 (100.955607ms)

                                                
                                                
-- stdout --
	multinode-500038
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-500038-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-500038 status --alsologtostderr: exit status 7 (100.168531ms)

                                                
                                                
-- stdout --
	multinode-500038
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-500038-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 20:12:28.916645  842967 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:12:28.916912  842967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:12:28.916939  842967 out.go:358] Setting ErrFile to fd 2...
	I0920 20:12:28.916961  842967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:12:28.917256  842967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 20:12:28.917485  842967 out.go:352] Setting JSON to false
	I0920 20:12:28.917583  842967 mustload.go:65] Loading cluster: multinode-500038
	I0920 20:12:28.917626  842967 notify.go:220] Checking for updates...
	I0920 20:12:28.918120  842967 config.go:182] Loaded profile config "multinode-500038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:12:28.918449  842967 status.go:174] checking status of multinode-500038 ...
	I0920 20:12:28.919253  842967 cli_runner.go:164] Run: docker container inspect multinode-500038 --format={{.State.Status}}
	I0920 20:12:28.937284  842967 status.go:364] multinode-500038 host status = "Stopped" (err=<nil>)
	I0920 20:12:28.937319  842967 status.go:377] host is not running, skipping remaining checks
	I0920 20:12:28.937327  842967 status.go:176] multinode-500038 status: &{Name:multinode-500038 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 20:12:28.937374  842967 status.go:174] checking status of multinode-500038-m02 ...
	I0920 20:12:28.937706  842967 cli_runner.go:164] Run: docker container inspect multinode-500038-m02 --format={{.State.Status}}
	I0920 20:12:28.966536  842967 status.go:364] multinode-500038-m02 host status = "Stopped" (err=<nil>)
	I0920 20:12:28.966558  842967 status.go:377] host is not running, skipping remaining checks
	I0920 20:12:28.966565  842967 status.go:176] multinode-500038-m02 status: &{Name:multinode-500038-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.95s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (63.41s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-500038 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 20:12:30.507657  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-500038 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.694770303s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-500038 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (63.41s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-500038
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-500038-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-500038-m02 --driver=docker  --container-runtime=crio: exit status 14 (118.567022ms)

                                                
                                                
-- stdout --
	* [multinode-500038-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-500038-m02' is duplicated with machine name 'multinode-500038-m02' in profile 'multinode-500038'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-500038-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-500038-m03 --driver=docker  --container-runtime=crio: (34.242817419s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-500038
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-500038: exit status 80 (352.266828ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-500038 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-500038-m03 already exists in multinode-500038-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-500038-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-500038-m03: (2.001814108s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.77s)
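
Aside: the MK_USAGE failure above enforces that a new profile name may not collide with a machine name inside an existing multi-node profile (here, multinode-500038-m02). A toy Go sketch of that uniqueness rule; the profile map is a hypothetical stand-in for what `minikube profile list -ojson` reports:

	// name_conflict.go: a sketch of the profile/machine name uniqueness
	// check exercised by ValidateNameConflict above.
	package main

	import "fmt"

	func validateName(name string, profiles map[string][]string) error {
		for profile, machines := range profiles {
			for _, m := range machines {
				if m == name {
					return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
						name, m, profile)
				}
			}
		}
		return nil
	}

	func main() {
		profiles := map[string][]string{
			"multinode-500038": {"multinode-500038", "multinode-500038-m02"},
		}
		fmt.Println(validateName("multinode-500038-m02", profiles)) // conflict, as in the log
		fmt.Println(validateName("multinode-500038-m03", profiles)) // no conflict at start time
	}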

                                                
                                    
TestPreload (128.35s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-104682 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0920 20:14:26.005259  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-104682 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.37359663s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-104682 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-104682 image pull gcr.io/k8s-minikube/busybox: (3.20275541s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-104682
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-104682: (5.808754493s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-104682 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-104682 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.240455112s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-104682 image list
helpers_test.go:175: Cleaning up "test-preload-104682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-104682
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-104682: (2.438591285s)
--- PASS: TestPreload (128.35s)

                                                
                                    
TestScheduledStopUnix (106.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-831432 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-831432 --memory=2048 --driver=docker  --container-runtime=crio: (29.099291833s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-831432 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-831432 -n scheduled-stop-831432
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-831432 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 20:16:51.273816  719734 retry.go:31] will retry after 114.428µs: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.274287  719734 retry.go:31] will retry after 124.084µs: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.275423  719734 retry.go:31] will retry after 188.146µs: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.276595  719734 retry.go:31] will retry after 257.754µs: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.277771  719734 retry.go:31] will retry after 712.932µs: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.278923  719734 retry.go:31] will retry after 725.684µs: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.280158  719734 retry.go:31] will retry after 915.174µs: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.282405  719734 retry.go:31] will retry after 1.124878ms: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.284658  719734 retry.go:31] will retry after 2.966763ms: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.287929  719734 retry.go:31] will retry after 5.335025ms: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.294200  719734 retry.go:31] will retry after 4.336881ms: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.299480  719734 retry.go:31] will retry after 10.694027ms: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.310741  719734 retry.go:31] will retry after 17.390063ms: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.328997  719734 retry.go:31] will retry after 29.001863ms: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
I0920 20:16:51.358298  719734 retry.go:31] will retry after 21.555953ms: open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/scheduled-stop-831432/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-831432 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-831432 -n scheduled-stop-831432
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-831432
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-831432 --schedule 15s
E0920 20:17:30.508904  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-831432
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-831432: exit status 7 (74.339646ms)

-- stdout --
	scheduled-stop-831432
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-831432 -n scheduled-stop-831432
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-831432 -n scheduled-stop-831432: exit status 7 (75.351809ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-831432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-831432
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-831432: (5.308561395s)
--- PASS: TestScheduledStopUnix (106.02s)
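The retry.go lines above show the pattern this test leans on: polling the profile's pid file with a growing back-off until the scheduled-stop daemon writes it. A minimal sketch of that pattern in Go follows; waitForPidFile, the path, and the intervals are illustrative, not minikube's actual helper.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls for path with a growing back-off, mirroring the
// "will retry after ..." retry.go lines above. Hypothetical helper.
func waitForPidFile(path string, maxAttempts int) error {
	backoff := 100 * time.Microsecond
	for i := 0; i < maxAttempts; i++ {
		f, err := os.Open(path)
		if err == nil {
			f.Close()
			return nil // the scheduled-stop daemon has written its pid
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // widen the interval between attempts
	}
	return fmt.Errorf("%s not created after %d attempts", path, maxAttempts)
}

func main() {
	if err := waitForPidFile("/tmp/scheduled-stop.pid", 15); err != nil {
		fmt.Println(err)
	}
}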

TestInsufficientStorage (11.2s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-001181 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-001181 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.634590461s)

-- stdout --
	{"specversion":"1.0","id":"b6208168-70fb-4ede-9527-6e6ea467ce37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-001181] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"515364ee-11ae-4753-97c5-2ae821b8405b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"930b9305-7825-443d-abba-daaa32e1a77c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"07af3a94-6430-46e6-a6e6-21f01c17272a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig"}}
	{"specversion":"1.0","id":"b10d389d-aab6-4d4c-a61f-f15a8d6fad68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube"}}
	{"specversion":"1.0","id":"15308353-1ea8-4765-8514-ee7077d77965","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dd4744d1-9640-400a-a8d5-1f25a6e0be1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1f4401fe-12e8-41c1-8420-5a29b151e213","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"92c2a657-147b-49de-b9c2-2683f3dbb651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"11b11239-3a70-4a1e-a6b4-10b7b0e472c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"59751da4-7186-4f06-b05b-c22e682de586","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"73eb8343-0899-45d5-8515-0405d4613b19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-001181\" primary control-plane node in \"insufficient-storage-001181\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbd0f938-faab-42dd-b60d-af7d13759c5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4de9ca6f-87d3-4653-9f58-29ca1b93c3e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"922e0915-7cc4-4564-a60a-cd101ba7074a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-001181 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-001181 --output=json --layout=cluster: exit status 7 (312.155334ms)

-- stdout --
	{"Name":"insufficient-storage-001181","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-001181","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 20:18:16.595850  860654 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-001181" does not appear in /home/jenkins/minikube-integration/19678-712952/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-001181 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-001181 --output=json --layout=cluster: exit status 7 (307.135309ms)

-- stdout --
	{"Name":"insufficient-storage-001181","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-001181","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 20:18:16.900567  860715 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-001181" does not appear in /home/jenkins/minikube-integration/19678-712952/kubeconfig
	E0920 20:18:16.911379  860715 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/insufficient-storage-001181/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-001181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-001181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-001181: (1.943487356s)
--- PASS: TestInsufficientStorage (11.20s)
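For reference, the `--output=json --layout=cluster` payload above decodes with a small struct. This is a sketch that models only the fields visible in this report; the real schema carries more.

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus is trimmed to the fields shown in the report above.
type clusterStatus struct {
	Name         string
	StatusCode   int
	StatusName   string
	StatusDetail string
	Nodes        []struct {
		Name       string
		StatusCode int
		StatusName string
	}
}

func main() {
	raw := `{"Name":"insufficient-storage-001181","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Nodes":[{"Name":"insufficient-storage-001181","StatusCode":507,"StatusName":"InsufficientStorage"}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// 507 is the InsufficientStorage code the test asserts on.
	fmt.Printf("%s: %d %s (%s)\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
}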

TestRunningBinaryUpgrade (65.08s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.960405919 start -p running-upgrade-339551 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.960405919 start -p running-upgrade-339551 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.919592439s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-339551 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-339551 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.173895763s)
helpers_test.go:175: Cleaning up "running-upgrade-339551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-339551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-339551: (3.26976319s)
--- PASS: TestRunningBinaryUpgrade (65.08s)
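The upgrade flow this test drives is just two starts against the same profile with different binaries, then a cleanup. A hedged sketch with os/exec; the old-binary path is illustrative (the test uses a downloaded temp file such as /tmp/minikube-v1.26.0.960405919).

package main

import (
	"log"
	"os/exec"
)

// run shells out and aborts on failure, echoing the failing command.
func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
	}
}

func main() {
	const profile = "running-upgrade-339551"
	// Old binary first (illustrative path; see note above).
	run("/tmp/minikube-v1.26.0", "start", "-p", profile,
		"--memory=2200", "--vm-driver=docker", "--container-runtime=crio")
	// Then the binary under test restarts the same, still-running profile.
	run("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=2200", "--driver=docker", "--container-runtime=crio")
	run("out/minikube-linux-arm64", "delete", "-p", profile)
}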

TestKubernetesUpgrade (398.2s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-022418 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-022418 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m13.278006803s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-022418
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-022418: (1.919091818s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-022418 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-022418 status --format={{.Host}}: exit status 7 (95.600536ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-022418 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-022418 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m41.882922249s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-022418 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-022418 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-022418 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (254.392458ms)

-- stdout --
	* [kubernetes-upgrade-022418] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-022418
	    minikube start -p kubernetes-upgrade-022418 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0224182 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-022418 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-022418 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-022418 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.078377847s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-022418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-022418
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-022418: (2.559597015s)
--- PASS: TestKubernetesUpgrade (398.20s)
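The downgrade probe above depends only on the exit code: asking a v1.31.1 cluster to start at v1.20.0 must fail with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). A sketch of that assertion:

package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	// A --kubernetes-version older than the running cluster must be refused.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "kubernetes-upgrade-022418",
		"--memory=2200", "--kubernetes-version=v1.20.0",
		"--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		log.Println("downgrade refused with exit status 106, as expected")
		return
	}
	log.Fatalf("expected exit status 106, got: %v", err)
}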

TestMissingContainerUpgrade (171.49s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1370168880 start -p missing-upgrade-381674 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1370168880 start -p missing-upgrade-381674 --memory=2200 --driver=docker  --container-runtime=crio: (1m27.710610132s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-381674
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-381674: (10.417139208s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-381674
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-381674 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-381674 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.499434219s)
helpers_test.go:175: Cleaning up "missing-upgrade-381674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-381674
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-381674: (2.8765979s)
--- PASS: TestMissingContainerUpgrade (171.49s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-789260 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-789260 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (91.155432ms)

-- stdout --
	* [NoKubernetes-789260] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (43.57s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-789260 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-789260 --driver=docker  --container-runtime=crio: (43.110041736s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-789260 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.57s)

TestNoKubernetes/serial/StartWithStopK8s (8.12s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-789260 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-789260 --no-kubernetes --driver=docker  --container-runtime=crio: (5.600118512s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-789260 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-789260 status -o json: exit status 2 (373.504426ms)

-- stdout --
	{"Name":"NoKubernetes-789260","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-789260
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-789260: (2.144244893s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.12s)
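The `status -o json` document above is flat, so asserting the no-kubernetes state takes only a few fields. A sketch; the struct mirrors just the keys printed here.

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the keys `minikube status -o json` printed above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-789260","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	ok := st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped"
	fmt.Println("host up with Kubernetes stopped:", ok) // minikube's exit status 2 signals the same
}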

TestNoKubernetes/serial/Start (9.5s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-789260 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-789260 --no-kubernetes --driver=docker  --container-runtime=crio: (9.504086098s)
--- PASS: TestNoKubernetes/serial/Start (9.50s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-789260 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-789260 "sudo systemctl is-active --quiet service kubelet": exit status 1 (348.768193ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
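`systemctl is-active --quiet` exits non-zero for an inactive unit, so the failing ssh command above (ssh exit status 3) is the passing outcome. A sketch of the same check driven from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// An error here means the unit is inactive, which is what the test wants.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-789260",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet inactive, as the test expects:", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}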

TestNoKubernetes/serial/ProfileList (1.2s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-789260
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-789260: (1.283112905s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (8.44s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-789260 --driver=docker  --container-runtime=crio
E0920 20:19:26.005093  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-789260 --driver=docker  --container-runtime=crio: (8.436117265s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.44s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-789260 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-789260 "sudo systemctl is-active --quiet service kubelet": exit status 1 (340.995917ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

TestStoppedBinaryUpgrade/Setup (1.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.52s)

TestStoppedBinaryUpgrade/Upgrade (78.6s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2252232937 start -p stopped-upgrade-479494 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2252232937 start -p stopped-upgrade-479494 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.772908131s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2252232937 -p stopped-upgrade-479494 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2252232937 -p stopped-upgrade-479494 stop: (2.743997354s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-479494 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0920 20:22:29.074223  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-479494 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.087046508s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (78.60s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.54s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-479494
E0920 20:22:30.510318  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-479494: (1.544618032s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.54s)

TestPause/serial/Start (84.37s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-707124 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0920 20:24:26.008871  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-707124 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.372439659s)
--- PASS: TestPause/serial/Start (84.37s)

TestPause/serial/SecondStartNoReconfiguration (24.42s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-707124 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-707124 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.386704113s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.42s)

TestPause/serial/Pause (1.04s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-707124 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-707124 --alsologtostderr -v=5: (1.036565758s)
--- PASS: TestPause/serial/Pause (1.04s)

TestPause/serial/VerifyStatus (0.56s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-707124 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-707124 --output=json --layout=cluster: exit status 2 (555.481077ms)

-- stdout --
	{"Name":"pause-707124","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-707124","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.56s)
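In the paused layout above the components carry HTTP-style codes: 418 for Paused, 405 for Stopped, 200 for OK. A sketch that keys off the code rather than the name; the struct is trimmed to the fields this report actually shows.

package main

import (
	"encoding/json"
	"fmt"
)

// component holds the per-component subset of the layout JSON above.
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	raw := `{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}`
	var comps map[string]component
	if err := json.Unmarshal([]byte(raw), &comps); err != nil {
		panic(err)
	}
	fmt.Println("apiserver paused:", comps["apiserver"].StatusCode == 418)
	fmt.Println("kubelet stopped:", comps["kubelet"].StatusCode == 405)
}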

TestPause/serial/Unpause (1.21s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-707124 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-707124 --alsologtostderr -v=5: (1.208047814s)
--- PASS: TestPause/serial/Unpause (1.21s)

TestPause/serial/PauseAgain (1.35s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-707124 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-707124 --alsologtostderr -v=5: (1.354654687s)
--- PASS: TestPause/serial/PauseAgain (1.35s)

TestPause/serial/DeletePaused (3.85s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-707124 --alsologtostderr -v=5
E0920 20:25:33.581961  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-707124 --alsologtostderr -v=5: (3.84939796s)
--- PASS: TestPause/serial/DeletePaused (3.85s)

TestPause/serial/VerifyDeletedResources (15.67s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (15.620205669s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-707124
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-707124: exit status 1 (17.237061ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-707124: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.67s)

TestNetworkPlugins/group/false (5.52s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-013090 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-013090 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (279.724219ms)

-- stdout --
	* [false-013090] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0920 20:26:16.504442  900427 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:26:16.504733  900427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:26:16.504763  900427 out.go:358] Setting ErrFile to fd 2...
	I0920 20:26:16.504782  900427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:26:16.505101  900427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-712952/.minikube/bin
	I0920 20:26:16.505643  900427 out.go:352] Setting JSON to false
	I0920 20:26:16.506837  900427 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14925,"bootTime":1726849051,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0920 20:26:16.506962  900427 start.go:139] virtualization:  
	I0920 20:26:16.510996  900427 out.go:177] * [false-013090] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 20:26:16.514709  900427 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 20:26:16.514795  900427 notify.go:220] Checking for updates...
	I0920 20:26:16.519163  900427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:26:16.522591  900427 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-712952/kubeconfig
	I0920 20:26:16.525177  900427 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-712952/.minikube
	I0920 20:26:16.527964  900427 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 20:26:16.530810  900427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:26:16.534057  900427 config.go:182] Loaded profile config "force-systemd-flag-188929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:26:16.534250  900427 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:26:16.558733  900427 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 20:26:16.558887  900427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:26:16.685312  900427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 20:26:16.668158417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 20:26:16.685430  900427 docker.go:318] overlay module found
	I0920 20:26:16.688270  900427 out.go:177] * Using the docker driver based on user configuration
	I0920 20:26:16.690944  900427 start.go:297] selected driver: docker
	I0920 20:26:16.690970  900427 start.go:901] validating driver "docker" against <nil>
	I0920 20:26:16.690984  900427 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:26:16.694133  900427 out.go:201] 
	W0920 20:26:16.697011  900427 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0920 20:26:16.699750  900427 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-013090 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-013090" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-013090

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: /etc/docker/daemon.json:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: docker system info:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: cri-docker daemon status:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: cri-docker daemon config:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: cri-dockerd version:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: containerd daemon status:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: containerd daemon config:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: /etc/containerd/config.toml:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: containerd config dump:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: crio daemon status:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: crio daemon config:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: /etc/crio:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

>>> host: crio config:
* Profile "false-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-013090"

----------------------- debugLogs end: false-013090 [took: 4.930529583s] --------------------------------
helpers_test.go:175: Cleaning up "false-013090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-013090
--- PASS: TestNetworkPlugins/group/false (5.52s)
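The debug probes above all failed the same way because the false-013090 profile was never created: the TestNetworkPlugins/group/false case finished in about five seconds without ever starting a cluster, so every per-profile log and config lookup reported "Profile not found". Leftover profiles can be checked and removed by hand the same way the harness cleanup does (a minimal sketch, assuming the minikube binary built for this run):

	out/minikube-linux-arm64 profile list            # false-013090 should not be listed
	out/minikube-linux-arm64 delete -p false-013090  # a no-op here; mirrors the cleanup step above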

TestStartStop/group/old-k8s-version/serial/FirstStart (162.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-390700 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0920 20:29:26.003630  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-390700 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m42.881519708s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.88s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-390700 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [48bdcd8c-8c77-4813-b44a-898943f13bbd] Pending
helpers_test.go:344: "busybox" [48bdcd8c-8c77-4813-b44a-898943f13bbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [48bdcd8c-8c77-4813-b44a-898943f13bbd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005265034s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-390700 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.83s)

TestStartStop/group/embed-certs/serial/FirstStart (81.7s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-113980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-113980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m21.703920223s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.70s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-390700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-390700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.680860011s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-390700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.84s)

TestStartStop/group/old-k8s-version/serial/Stop (12.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-390700 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-390700 --alsologtostderr -v=3: (12.564269419s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-390700 -n old-k8s-version-390700
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-390700 -n old-k8s-version-390700: exit status 7 (112.314046ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-390700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)
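The exit status 7 above is expected immediately after a stop: minikube status reports component health through a small exit-code bitmask rather than a plain 0/1, so a fully stopped profile yields 7, while the paused clusters later in this run yield 2; that is why the harness annotates both with "(may be ok)". The same check by hand (a minimal sketch, assuming the binary used by this run):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-390700
	echo $?    # 7 while the profile is stopped, 0 once it is running again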

TestStartStop/group/old-k8s-version/serial/SecondStart (149.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-390700 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-390700 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m29.03835463s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-390700 -n old-k8s-version-390700
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (149.39s)

TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-113980 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [774058c0-c8c9-4afa-b8b7-1ac4e38474c3] Pending
helpers_test.go:344: "busybox" [774058c0-c8c9-4afa-b8b7-1ac4e38474c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [774058c0-c8c9-4afa-b8b7-1ac4e38474c3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003850571s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-113980 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-113980 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-113980 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047906152s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-113980 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/embed-certs/serial/Stop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-113980 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-113980 --alsologtostderr -v=3: (12.027907711s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-113980 -n embed-certs-113980
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-113980 -n embed-certs-113980: exit status 7 (98.179762ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-113980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (300.55s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-113980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:32:30.508116  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-113980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (5m0.173806935s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-113980 -n embed-certs-113980
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (300.55s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rghrq" [b2ca5b46-f80d-4aca-8d13-dacbcf2f6cf1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005387847s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rghrq" [b2ca5b46-f80d-4aca-8d13-dacbcf2f6cf1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003763823s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-390700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-390700 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (3.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-390700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-390700 -n old-k8s-version-390700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-390700 -n old-k8s-version-390700: exit status 2 (334.624024ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-390700 -n old-k8s-version-390700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-390700 -n old-k8s-version-390700: exit status 2 (333.829451ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-390700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-390700 -n old-k8s-version-390700
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-390700 -n old-k8s-version-390700
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.10s)

TestStartStop/group/no-preload/serial/FirstStart (67.51s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-443188 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:34:26.005188  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-443188 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m7.512423918s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.51s)

TestStartStop/group/no-preload/serial/DeployApp (10.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-443188 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e9d79b58-139b-4c58-ac8c-4b433d02a513] Pending
helpers_test.go:344: "busybox" [e9d79b58-139b-4c58-ac8c-4b433d02a513] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e9d79b58-139b-4c58-ac8c-4b433d02a513] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.015785879s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-443188 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-443188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-443188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11221066s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-443188 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/Stop (12.04s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-443188 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-443188 --alsologtostderr -v=3: (12.044103558s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.04s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-443188 -n no-preload-443188
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-443188 -n no-preload-443188: exit status 7 (69.024441ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-443188 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (282.02s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-443188 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:35:32.176310  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:32.182805  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:32.194242  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:32.215691  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:32.257133  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:32.338640  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:32.500144  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:32.821618  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:33.463309  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:34.744876  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:37.306144  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:42.430795  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:35:52.672879  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:36:13.154991  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:36:54.116927  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-443188 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m41.659062762s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-443188 -n no-preload-443188
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (282.02s)
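The cert_rotation burst above is client-go's certificate-rotation watcher repeatedly failing to open a client.crt that is absent at that point in the run, and the timestamps trace its exponential backoff: the gap between consecutive messages roughly doubles from about 6 ms (20:35:32.176 to 20:35:32.182) up to about 41 s (20:36:13.155 to 20:36:54.117), after which the retry interval outlasts the remaining log window. The messages are noise from a missing profile certificate and do not affect the passing result above.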

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hp57n" [9df7b9e0-7b98-48a8-9080-cee3a3354b51] Running
E0920 20:37:30.507440  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004715488s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hp57n" [9df7b9e0-7b98-48a8-9080-cee3a3354b51] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004626885s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-113980 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-113980 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-113980 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-113980 -n embed-certs-113980
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-113980 -n embed-certs-113980: exit status 2 (330.582486ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-113980 -n embed-certs-113980
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-113980 -n embed-certs-113980: exit status 2 (343.448546ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-113980 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-113980 -n embed-certs-113980
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-113980 -n embed-certs-113980
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.19s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-304399 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:38:16.038507  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-304399 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m16.991167456s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.99s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-304399 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9c16e0d3-4a44-4446-b110-40d12b501dfa] Pending
helpers_test.go:344: "busybox" [9c16e0d3-4a44-4446-b110-40d12b501dfa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9c16e0d3-4a44-4446-b110-40d12b501dfa] Running
E0920 20:39:09.075528  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004380217s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-304399 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-304399 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-304399 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.093152258s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-304399 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-304399 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-304399 --alsologtostderr -v=3: (11.947122171s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-304399 -n default-k8s-diff-port-304399
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-304399 -n default-k8s-diff-port-304399: exit status 7 (73.662268ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-304399 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-304399 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:39:26.003511  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-304399 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m27.957598819s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-304399 -n default-k8s-diff-port-304399
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.31s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tn4jh" [4fbc1aa4-5582-418c-a7a4-b9b950bd3826] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004249774s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tn4jh" [4fbc1aa4-5582-418c-a7a4-b9b950bd3826] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004560282s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-443188 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-443188 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.22s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-443188 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-443188 -n no-preload-443188
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-443188 -n no-preload-443188: exit status 2 (365.355397ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-443188 -n no-preload-443188
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-443188 -n no-preload-443188: exit status 2 (374.406171ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-443188 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-443188 -n no-preload-443188
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-443188 -n no-preload-443188
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.22s)

TestStartStop/group/newest-cni/serial/FirstStart (35.83s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-780883 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:40:32.176317  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-780883 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (35.834199275s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.83s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-780883 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-780883 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044724769s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-780883 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-780883 --alsologtostderr -v=3: (1.257755215s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-780883 -n newest-cni-780883
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-780883 -n newest-cni-780883: exit status 7 (90.057049ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-780883 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (17.16s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-780883 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:40:59.880029  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-780883 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (16.773460259s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-780883 -n newest-cni-780883
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.16s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-780883 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
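The VerifyKubernetesImages steps dump the loaded images as JSON and flag anything outside minikube's default set (the kindest/kindnetd tag above). A hedged sketch of consuming that output; the field names in the struct below are an assumption about the JSON shape, not a documented schema:

    // image_list.go: decode minikube image list --format=json and print
    // repo tags. Sketch only; verify the field names against your build.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type image struct {
    	ID       string   `json:"id"`       // assumed field name
    	RepoTags []string `json:"repoTags"` // assumed field name
    }

    func main() {
    	out, err := exec.Command("minikube", "-p", "newest-cni-780883",
    		"image", "list", "--format=json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var images []image
    	if err := json.Unmarshal(out, &images); err != nil {
    		panic(err)
    	}
    	for _, img := range images {
    		fmt.Println(img.RepoTags)
    	}
    }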

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-780883 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-780883 -n newest-cni-780883
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-780883 -n newest-cni-780883: exit status 2 (358.082347ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-780883 -n newest-cni-780883
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-780883 -n newest-cni-780883: exit status 2 (355.43111ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-780883 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-780883 -n newest-cni-780883
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-780883 -n newest-cni-780883
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)
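Worth noting how the Pause verification reads component state: --format accepts a Go text/template rendered against minikube's status value, so {{.APIServer}} and {{.Kubelet}} each select one field, and a paused cluster reports Paused for the API server but Stopped for the kubelet, both with the expected exit status 2. A small illustration of the template mechanism; clusterStatus here is a hypothetical stand-in for minikube's real status type:

    // format_flag.go: how a --format={{.Field}} flag typically works.
    package main

    import (
    	"os"
    	"text/template"
    )

    type clusterStatus struct {
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	st := clusterStatus{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	_ = tmpl.Execute(os.Stdout, st) // prints "Paused"
    }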

                                                
                                    
TestNetworkPlugins/group/auto/Start (77.65s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0920 20:42:13.584280  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:42:30.507884  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m17.651399837s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.65s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-013090 "pgrep -a kubelet"
I0920 20:42:34.438935  719734 config.go:182] Loaded profile config "auto-013090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-013090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n8jzx" [07b734ce-66ff-4932-b8b4-7f698d131db4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n8jzx" [07b734ce-66ff-4932-b8b4-7f698d131db4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004231219s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.27s)
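The NetCatPod steps across all the network-plugin groups follow one pattern: force-recreate the deployment from testdata/netcat-deployment.yaml, then poll until a pod labeled app=netcat is healthy. A sketch of that wait loop written against client-go; the harness's actual helper in helpers_test.go also logs the Pending/ContainersNotReady transitions seen above, which this reduces to a simple Running check. Assumes a reachable kubeconfig:

    // wait_netcat.go: poll the default namespace for a running app=netcat pod.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(15 * time.Minute) // same budget as the test
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "app=netcat"})
    		if err != nil {
    			panic(err)
    		}
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodRunning {
    				fmt.Println("healthy:", p.Name)
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	panic("no running app=netcat pod within 15m")
    }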

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-013090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
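The Localhost and HairPin checks are both a bare TCP reachability probe: nc -w 5 -i 5 -z exits zero if the port accepts a connection, against 127.0.0.1 for Localhost and against the pod's own service name for HairPin (traffic that leaves the pod and loops back through the service VIP). The same probe expressed in Go, as an analogy only, since the real tests run nc inside the netcat pod where "localhost" and "netcat" resolve from the pod's network namespace:

    // tcp_probe.go: the essence of nc -z host 8080 with a 5s timeout.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func canConnect(host string) bool {
    	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8080"), 5*time.Second)
    	if err != nil {
    		return false
    	}
    	conn.Close()
    	return true
    }

    func main() {
    	fmt.Println("localhost:", canConnect("localhost")) // Localhost check
    	fmt.Println("netcat:", canConnect("netcat"))       // HairPin check
    }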

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (81.43s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m21.431826599s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.43s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bwvdm" [03243f0e-18be-49fc-9825-bda38ca252a0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004282996s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bwvdm" [03243f0e-18be-49fc-9825-bda38ca252a0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003623536s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-304399 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-304399 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-304399 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-304399 -n default-k8s-diff-port-304399
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-304399 -n default-k8s-diff-port-304399: exit status 2 (336.463984ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-304399 -n default-k8s-diff-port-304399
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-304399 -n default-k8s-diff-port-304399: exit status 2 (331.827957ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-304399 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-304399 -n default-k8s-diff-port-304399
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-304399 -n default-k8s-diff-port-304399
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.23s)
E0920 20:48:56.626533  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:48:59.692804  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:48:59.699338  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:48:59.710832  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:48:59.732377  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:48:59.773807  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:48:59.855235  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:00.017131  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:00.340358  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:00.983428  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:02.265208  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:04.827116  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:09.948468  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:20.190300  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/default-k8s-diff-port-304399/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.58s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0920 20:44:26.006275  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m7.581400212s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.58s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cbkbr" [16a53bde-548a-4b2a-b005-47436171d17b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003861449s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-013090 "pgrep -a kubelet"
I0920 20:44:34.257563  719734 config.go:182] Loaded profile config "kindnet-013090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-013090 replace --force -f testdata/netcat-deployment.yaml
I0920 20:44:34.628129  719734 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4z7fb" [d9d24bb0-f5fd-4815-9d89-deb490c10bbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4z7fb" [d9d24bb0-f5fd-4815-9d89-deb490c10bbb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.022372068s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

TestNetworkPlugins/group/kindnet/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-013090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.33s)

TestNetworkPlugins/group/kindnet/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.31s)

TestNetworkPlugins/group/kindnet/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

TestNetworkPlugins/group/custom-flannel/Start (57.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.78561611s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.79s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p5t4d" [4db8524a-b986-45c2-b76b-d22a920b0cfd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005453503s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-013090 "pgrep -a kubelet"
I0920 20:45:23.009579  719734 config.go:182] Loaded profile config "calico-013090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (13.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-013090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sjtbl" [33877030-0231-44ac-969c-63bb531002e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sjtbl" [33877030-0231-44ac-969c-63bb531002e6] Running
E0920 20:45:32.176106  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/old-k8s-version-390700/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:45:32.976969  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/no-preload-443188/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005304198s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.34s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-013090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.33s)

TestNetworkPlugins/group/enable-default-cni/Start (83.04s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m23.036183786s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.04s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-013090 "pgrep -a kubelet"
I0920 20:46:11.287797  719734 config.go:182] Loaded profile config "custom-flannel-013090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-013090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v4cgq" [5b0c4e2f-7908-4db4-962c-7c33b8299121] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:46:13.938849  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/no-preload-443188/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-v4cgq" [5b0c4e2f-7908-4db4-962c-7c33b8299121] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003945039s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-013090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (53.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.816834651s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.82s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-013090 "pgrep -a kubelet"
I0920 20:47:25.576549  719734 config.go:182] Loaded profile config "enable-default-cni-013090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-013090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sxsqj" [1f51f3e7-f8f3-4f38-a59d-9376658f1b73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:47:30.507837  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/functional-539812/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-sxsqj" [1f51f3e7-f8f3-4f38-a59d-9376658f1b73] Running
E0920 20:47:34.683660  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:34.690126  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:34.701491  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:34.724320  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:34.765706  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:34.847217  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:35.009939  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:35.332168  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:35.861425  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/no-preload-443188/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:35.974231  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:47:37.255879  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00389957s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-013090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4bsp9" [415ced46-f6cc-45dc-9e3e-cdbe04490387] Running
E0920 20:47:44.939251  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00511352s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-013090 "pgrep -a kubelet"
I0920 20:47:49.300982  719734 config.go:182] Loaded profile config "flannel-013090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-013090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gvjw2" [bf0064be-01c7-41a8-8850-7dc6a27fa067] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:47:55.180873  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/auto-013090/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-gvjw2" [bf0064be-01c7-41a8-8850-7dc6a27fa067] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.028231918s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.39s)

TestNetworkPlugins/group/flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-013090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

TestNetworkPlugins/group/bridge/Start (82.99s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-013090 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m22.986432277s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.99s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-013090 "pgrep -a kubelet"
I0920 20:49:24.043712  719734 config.go:182] Loaded profile config "bridge-013090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-013090 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6jrz4" [b3fc54dd-bfc9-4153-817a-966c207b5c88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:49:26.005364  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/addons-244316/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:27.838845  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:27.846208  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:27.857820  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:27.879208  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:27.921026  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:28.002567  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:28.165678  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:28.487580  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:29.129760  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6jrz4" [b3fc54dd-bfc9-4153-817a-966c207b5c88] Running
E0920 20:49:30.411653  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:49:32.973901  719734 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-712952/.minikube/profiles/kindnet-013090/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.0044968s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-013090 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-013090 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (29/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-394536 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-394536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-394536
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
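The === PAUSE / === CONT markers in these blocks come straight from Go's testing package: a test that calls t.Parallel() is suspended after === RUN and resumed (=== CONT) once the serial portion of its group finishes, which is also why paused-then-skipped tests report 0.00s. A sketch of the pattern these skipped tests follow, with an illustrative arch guard; minikube's real skip helpers may differ:

    // offline_sketch_test.go: parallel test that skips on this runner's arch.
    package integration

    import (
    	"runtime"
    	"testing"
    )

    func TestOfflineSketch(t *testing.T) {
    	t.Parallel() // logged as "=== PAUSE", later "=== CONT"
    	if runtime.GOARCH == "arm64" {
    		t.Skip("only docker runtime supported on arm64") // cf. aab_offline_test.go:35
    	}
    	// offline test body would run here
    }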

                                                
                                    
TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

x
+
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-917855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-917855
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

x
+
TestNetworkPlugins/group/kubenet (4.52s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-013090 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-013090

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-013090

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-013090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-013090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-013090

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-013090

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-013090

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-013090

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-013090

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-013090

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /etc/hosts:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /etc/resolv.conf:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-013090

>>> host: crictl pods:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: crictl containers:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> k8s: describe netcat deployment:
error: context "kubenet-013090" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-013090" does not exist

>>> k8s: netcat logs:
error: context "kubenet-013090" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-013090" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-013090" does not exist

>>> k8s: coredns logs:
error: context "kubenet-013090" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-013090" does not exist

>>> k8s: api server logs:
error: context "kubenet-013090" does not exist

>>> host: /etc/cni:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: ip a s:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: ip r s:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: iptables-save:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: iptables table nat:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-013090" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-013090" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-013090" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: kubelet daemon config:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> k8s: kubelet logs:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-013090

>>> host: docker daemon status:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: docker daemon config:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: docker system info:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: cri-docker daemon status:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: cri-docker daemon config:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: cri-dockerd version:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: containerd daemon status:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: containerd daemon config:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: containerd config dump:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: crio daemon status:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: crio daemon config:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: /etc/crio:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"

>>> host: crio config:
* Profile "kubenet-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-013090"
----------------------- debugLogs end: kubenet-013090 [took: 4.331488944s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-013090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-013090
--- SKIP: TestNetworkPlugins/group/kubenet (4.52s)
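
Every probe in the debugLogs above failed the same way because the test skips before "minikube start" ever runs, so no kubeconfig entry for kubenet-013090 is created; the all-null "kubectl config" dump is the tell. A sketch of how a log collector could detect that up front with client-go, purely for illustration:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the merged kubeconfig the same way kubectl does.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// With clusters/contexts/users all null, this lookup fails, which
		// is exactly what every kubectl call above reported.
		if _, ok := cfg.Contexts["kubenet-013090"]; !ok {
			fmt.Println(`context "kubenet-013090" does not exist; skipping log collection`)
		}
	}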

x
+
TestNetworkPlugins/group/cilium (5.78s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-013090 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-013090

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-013090

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-013090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-013090

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-013090

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-013090

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-013090

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-013090

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-013090

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-013090

>>> host: /etc/nsswitch.conf:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /etc/hosts:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /etc/resolv.conf:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-013090

>>> host: crictl pods:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: crictl containers:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> k8s: describe netcat deployment:
error: context "cilium-013090" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-013090" does not exist

>>> k8s: netcat logs:
error: context "cilium-013090" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-013090" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-013090" does not exist

>>> k8s: coredns logs:
error: context "cilium-013090" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-013090" does not exist

>>> k8s: api server logs:
error: context "cilium-013090" does not exist

>>> host: /etc/cni:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: ip a s:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: ip r s:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: iptables-save:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: iptables table nat:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-013090

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-013090

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-013090" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-013090" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-013090

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-013090

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-013090" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-013090" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-013090" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-013090" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-013090" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: kubelet daemon config:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> k8s: kubelet logs:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-013090

>>> host: docker daemon status:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: docker daemon config:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: docker system info:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: cri-docker daemon status:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: cri-docker daemon config:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: cri-dockerd version:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: containerd daemon status:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: containerd daemon config:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: containerd config dump:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: crio daemon status:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: crio daemon config:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: /etc/crio:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"

>>> host: crio config:
* Profile "cilium-013090" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013090"
----------------------- debugLogs end: cilium-013090 [took: 5.563953942s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-013090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-013090
--- SKIP: TestNetworkPlugins/group/cilium (5.78s)