Test Report: Docker_Linux_crio_arm64 19575

7bfa33b863353ea74c2dd2110cc17945d6c51e0f:2024-09-04:36080

Test failures (5/328)

Order | Failed test                                  | Duration (s)
------|----------------------------------------------|-------------
   33 | TestAddons/parallel/Registry                 |        73.84
   34 | TestAddons/parallel/Ingress                  |       150.45
   36 | TestAddons/parallel/MetricsServer            |       363.14
  111 | TestFunctional/parallel/License              |         0.24
  174 | TestMultiControlPlane/serial/RestartCluster  |       129.64
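The first failure, detailed below, is the in-cluster registry check timing out. It can be re-run by hand against a live profile with the same command the test uses (copied verbatim from the log below; the profile name addons-057989 is specific to this run):

	kubectl --context addons-057989 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

The test expects an "HTTP/1.1 200" header in the wget output; in this run the command exited non-zero after roughly one minute with "timed out waiting for the condition".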
TestAddons/parallel/Registry (73.84s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.812743ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-q2v5x" [08b3698e-ab89-4393-846c-c4d5984ebe9e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004561345s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xfn95" [19eda952-0370-4c89-ad9f-fa2fcf34e855] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003993875s
addons_test.go:342: (dbg) Run:  kubectl --context addons-057989 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-057989 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-057989 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.118431816s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-057989 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 ip
2024/09/04 20:47:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 addons disable registry --alsologtostderr -v=1
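Two quicker probes of the same registry are possible while the profile is still up; a hedged sketch (the port mapping 127.0.0.1:33530 -> 5000/tcp and the container IP 192.168.49.2 are taken from the docker inspect output below, and the /v2/ path is the standard registry HTTP API root rather than something this test exercises):

	# From the host, through the port Docker publishes for the container's 5000/tcp
	curl -sI http://127.0.0.1:33530/v2/
	# Through the container IP, as the test's DEBUG GET above does
	curl -sI http://192.168.49.2:5000/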
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-057989
helpers_test.go:235: (dbg) docker inspect addons-057989:

-- stdout --
	[
	    {
	        "Id": "73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320",
	        "Created": "2024-09-04T20:34:52.925359137Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717234,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-04T20:34:53.081030632Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8411aacd61cb8f2a7ae48c92e2c9e76ad4dd701b3dba8b30393c5cc31fbd2b15",
	        "ResolvConfPath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/hostname",
	        "HostsPath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/hosts",
	        "LogPath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320-json.log",
	        "Name": "/addons-057989",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-057989:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-057989",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be-init/diff:/var/lib/docker/overlay2/e164f50a1bfe4541271ed61a6ed23c33b9aae141da805b23620713759476fde0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-057989",
	                "Source": "/var/lib/docker/volumes/addons-057989/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-057989",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-057989",
	                "name.minikube.sigs.k8s.io": "addons-057989",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7817bba0f58403312356fc6c068a6420e3474d973c3bf9f9708656d8c06482b",
	            "SandboxKey": "/var/run/docker/netns/e7817bba0f58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-057989": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3dc03972bd677b6f27e0f7eb6bf3c869f01a326f25eec49d8a8d16973aa42236",
	                    "EndpointID": "3f89f8bd76b0eaeb20e7ece98c8b5534a50c35ccfbd1872e98138f979cab06b1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-057989",
	                        "73a9bf226299"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
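The inspect dump above is verbose; individual fields can be pulled with a Go template, the same way the harness does for the SSH port later in this log (substituting 5000/tcp here is an assumption for illustration):

	# Host port published for the registry's 5000/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-057989
	# Container IP on the addons-057989 network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-057989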
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-057989 -n addons-057989
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-057989 logs -n 25: (1.585969144s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-729266   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | -p download-only-729266              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| delete  | -p download-only-729266              | download-only-729266   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| start   | -o=json --download-only              | download-only-110365   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | -p download-only-110365              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| delete  | -p download-only-110365              | download-only-110365   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| delete  | -p download-only-729266              | download-only-729266   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| delete  | -p download-only-110365              | download-only-110365   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| start   | --download-only -p                   | download-docker-053885 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | download-docker-053885               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-053885            | download-docker-053885 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| start   | --download-only -p                   | binary-mirror-435820   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | binary-mirror-435820                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40553               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-435820              | binary-mirror-435820   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| addons  | enable dashboard -p                  | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | addons-057989                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | addons-057989                        |                        |         |         |                     |                     |
	| start   | -p addons-057989 --wait=true         | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:38 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-057989 addons                 | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-057989 addons                 | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-057989 ip                     | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	| addons  | addons-057989 addons disable         | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 20:34:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:34:26.364635  716742 out.go:345] Setting OutFile to fd 1 ...
	I0904 20:34:26.364772  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:26.364783  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:34:26.364788  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:26.365015  716742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 20:34:26.365465  716742 out.go:352] Setting JSON to false
	I0904 20:34:26.366331  716742 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15417,"bootTime":1725466650,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0904 20:34:26.366402  716742 start.go:139] virtualization:  
	I0904 20:34:26.368474  716742 out.go:177] * [addons-057989] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0904 20:34:26.370910  716742 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 20:34:26.371038  716742 notify.go:220] Checking for updates...
	I0904 20:34:26.374838  716742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:34:26.376708  716742 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:34:26.378539  716742 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	I0904 20:34:26.380267  716742 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 20:34:26.382309  716742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 20:34:26.384843  716742 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 20:34:26.407170  716742 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0904 20:34:26.407296  716742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:26.468251  716742 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-04 20:34:26.458330655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:26.468366  716742 docker.go:307] overlay module found
	I0904 20:34:26.470430  716742 out.go:177] * Using the docker driver based on user configuration
	I0904 20:34:26.472360  716742 start.go:297] selected driver: docker
	I0904 20:34:26.472375  716742 start.go:901] validating driver "docker" against <nil>
	I0904 20:34:26.472388  716742 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 20:34:26.473037  716742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:26.545537  716742 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-04 20:34:26.534442525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:26.545707  716742 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 20:34:26.546029  716742 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:34:26.547715  716742 out.go:177] * Using Docker driver with root privileges
	I0904 20:34:26.549443  716742 cni.go:84] Creating CNI manager for ""
	I0904 20:34:26.549486  716742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:34:26.549498  716742 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 20:34:26.549644  716742 start.go:340] cluster config:
	{Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPau
seInterval:1m0s}
	I0904 20:34:26.551789  716742 out.go:177] * Starting "addons-057989" primary control-plane node in "addons-057989" cluster
	I0904 20:34:26.553334  716742 cache.go:121] Beginning downloading kic base image for docker with crio
	I0904 20:34:26.554936  716742 out.go:177] * Pulling base image v0.0.45 ...
	I0904 20:34:26.556687  716742 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:26.556763  716742 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0904 20:34:26.556783  716742 cache.go:56] Caching tarball of preloaded images
	I0904 20:34:26.556881  716742 preload.go:172] Found /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0904 20:34:26.556903  716742 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0904 20:34:26.557407  716742 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/config.json ...
	I0904 20:34:26.557448  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/config.json: {Name:mk4c159eebe676425fef59d6562583fda185ed7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:26.557673  716742 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0904 20:34:26.576982  716742 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0904 20:34:26.577098  716742 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0904 20:34:26.577129  716742 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0904 20:34:26.577146  716742 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0904 20:34:26.577161  716742 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0904 20:34:26.577168  716742 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0904 20:34:44.336400  716742 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0904 20:34:44.336436  716742 cache.go:194] Successfully downloaded all kic artifacts
	I0904 20:34:44.336481  716742 start.go:360] acquireMachinesLock for addons-057989: {Name:mk0970b3a3d59ebd1c006a89f39ceb89ec07a595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:34:44.337080  716742 start.go:364] duration metric: took 571.787µs to acquireMachinesLock for "addons-057989"
	I0904 20:34:44.337123  716742 start.go:93] Provisioning new machine with config: &{Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:34:44.337224  716742 start.go:125] createHost starting for "" (driver="docker")
	I0904 20:34:44.340059  716742 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0904 20:34:44.340312  716742 start.go:159] libmachine.API.Create for "addons-057989" (driver="docker")
	I0904 20:34:44.340356  716742 client.go:168] LocalClient.Create starting
	I0904 20:34:44.340489  716742 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem
	I0904 20:34:45.869727  716742 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem
	I0904 20:34:46.857527  716742 cli_runner.go:164] Run: docker network inspect addons-057989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 20:34:46.873370  716742 cli_runner.go:211] docker network inspect addons-057989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 20:34:46.873460  716742 network_create.go:284] running [docker network inspect addons-057989] to gather additional debugging logs...
	I0904 20:34:46.873484  716742 cli_runner.go:164] Run: docker network inspect addons-057989
	W0904 20:34:46.888882  716742 cli_runner.go:211] docker network inspect addons-057989 returned with exit code 1
	I0904 20:34:46.888915  716742 network_create.go:287] error running [docker network inspect addons-057989]: docker network inspect addons-057989: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-057989 not found
	I0904 20:34:46.888929  716742 network_create.go:289] output of [docker network inspect addons-057989]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-057989 not found
	
	** /stderr **
	I0904 20:34:46.889032  716742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:34:46.906072  716742 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017d4850}
	I0904 20:34:46.906116  716742 network_create.go:124] attempt to create docker network addons-057989 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0904 20:34:46.906182  716742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-057989 addons-057989
	I0904 20:34:46.976444  716742 network_create.go:108] docker network addons-057989 192.168.49.0/24 created
	I0904 20:34:46.976479  716742 kic.go:121] calculated static IP "192.168.49.2" for the "addons-057989" container
	I0904 20:34:46.976555  716742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 20:34:46.992316  716742 cli_runner.go:164] Run: docker volume create addons-057989 --label name.minikube.sigs.k8s.io=addons-057989 --label created_by.minikube.sigs.k8s.io=true
	I0904 20:34:47.012952  716742 oci.go:103] Successfully created a docker volume addons-057989
	I0904 20:34:47.013072  716742 cli_runner.go:164] Run: docker run --rm --name addons-057989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-057989 --entrypoint /usr/bin/test -v addons-057989:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib
	I0904 20:34:48.615772  716742 cli_runner.go:217] Completed: docker run --rm --name addons-057989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-057989 --entrypoint /usr/bin/test -v addons-057989:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib: (1.602654612s)
	I0904 20:34:48.615806  716742 oci.go:107] Successfully prepared a docker volume addons-057989
	I0904 20:34:48.615827  716742 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:48.615846  716742 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 20:34:48.615942  716742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-057989:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 20:34:52.860186  716742 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-057989:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir: (4.244202783s)
	I0904 20:34:52.860217  716742 kic.go:203] duration metric: took 4.244368465s to extract preloaded images to volume ...
	W0904 20:34:52.860378  716742 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 20:34:52.860496  716742 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 20:34:52.910765  716742 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-057989 --name addons-057989 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-057989 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-057989 --network addons-057989 --ip 192.168.49.2 --volume addons-057989:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85
	I0904 20:34:53.252893  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Running}}
	I0904 20:34:53.272312  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:34:53.296422  716742 cli_runner.go:164] Run: docker exec addons-057989 stat /var/lib/dpkg/alternatives/iptables
	I0904 20:34:53.389330  716742 oci.go:144] the created container "addons-057989" has a running status.
	I0904 20:34:53.389362  716742 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa...
	I0904 20:34:54.130907  716742 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 20:34:54.153369  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:34:54.171584  716742 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 20:34:54.171605  716742 kic_runner.go:114] Args: [docker exec --privileged addons-057989 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 20:34:54.258829  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:34:54.279547  716742 machine.go:93] provisionDockerMachine start ...
	I0904 20:34:54.279655  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:54.304920  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:54.305248  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:54.305259  716742 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 20:34:54.430287  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-057989
	
	I0904 20:34:54.430311  716742 ubuntu.go:169] provisioning hostname "addons-057989"
	I0904 20:34:54.430389  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:54.451013  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:54.451268  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:54.451286  716742 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-057989 && echo "addons-057989" | sudo tee /etc/hostname
	I0904 20:34:54.595269  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-057989
	
	I0904 20:34:54.595355  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:54.613079  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:54.613362  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:54.613384  716742 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-057989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-057989/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-057989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 20:34:54.733925  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 20:34:54.734009  716742 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19575-710603/.minikube CaCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19575-710603/.minikube}
	I0904 20:34:54.734036  716742 ubuntu.go:177] setting up certificates
	I0904 20:34:54.734046  716742 provision.go:84] configureAuth start
	I0904 20:34:54.734112  716742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-057989
	I0904 20:34:54.752741  716742 provision.go:143] copyHostCerts
	I0904 20:34:54.752830  716742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem (1082 bytes)
	I0904 20:34:54.752951  716742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem (1123 bytes)
	I0904 20:34:54.753017  716742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem (1675 bytes)
	I0904 20:34:54.753069  716742 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem org=jenkins.addons-057989 san=[127.0.0.1 192.168.49.2 addons-057989 localhost minikube]
	I0904 20:34:55.147333  716742 provision.go:177] copyRemoteCerts
	I0904 20:34:55.147404  716742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 20:34:55.147447  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.165454  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.255682  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 20:34:55.281142  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 20:34:55.305289  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 20:34:55.329497  716742 provision.go:87] duration metric: took 595.436326ms to configureAuth
	I0904 20:34:55.329576  716742 ubuntu.go:193] setting minikube options for container-runtime
	I0904 20:34:55.329784  716742 config.go:182] Loaded profile config "addons-057989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:34:55.329932  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.346253  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:55.346495  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:55.346516  716742 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 20:34:55.565686  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 20:34:55.565710  716742 machine.go:96] duration metric: took 1.286141461s to provisionDockerMachine
	I0904 20:34:55.565720  716742 client.go:171] duration metric: took 11.225352854s to LocalClient.Create
	I0904 20:34:55.565732  716742 start.go:167] duration metric: took 11.225421054s to libmachine.API.Create "addons-057989"
	I0904 20:34:55.565740  716742 start.go:293] postStartSetup for "addons-057989" (driver="docker")
	I0904 20:34:55.565751  716742 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 20:34:55.565817  716742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 20:34:55.565881  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.583171  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.671214  716742 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 20:34:55.674581  716742 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 20:34:55.674617  716742 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 20:34:55.674629  716742 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 20:34:55.674636  716742 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0904 20:34:55.674651  716742 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/addons for local assets ...
	I0904 20:34:55.674722  716742 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/files for local assets ...
	I0904 20:34:55.674750  716742 start.go:296] duration metric: took 109.004783ms for postStartSetup
	I0904 20:34:55.675068  716742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-057989
	I0904 20:34:55.690396  716742 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/config.json ...
	I0904 20:34:55.690692  716742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 20:34:55.690748  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.706620  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.790685  716742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 20:34:55.795488  716742 start.go:128] duration metric: took 11.458247104s to createHost
	I0904 20:34:55.795510  716742 start.go:83] releasing machines lock for "addons-057989", held for 11.458409136s
	I0904 20:34:55.795590  716742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-057989
	I0904 20:34:55.811993  716742 ssh_runner.go:195] Run: cat /version.json
	I0904 20:34:55.812023  716742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 20:34:55.812045  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.812092  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.832455  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.839481  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:56.100314  716742 ssh_runner.go:195] Run: systemctl --version
	I0904 20:34:56.104670  716742 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 20:34:56.252641  716742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 20:34:56.256985  716742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:34:56.275558  716742 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 20:34:56.275632  716742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:34:56.310401  716742 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 20:34:56.310468  716742 start.go:495] detecting cgroup driver to use...
	I0904 20:34:56.310517  716742 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 20:34:56.310578  716742 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 20:34:56.326154  716742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 20:34:56.337266  716742 docker.go:217] disabling cri-docker service (if available) ...
	I0904 20:34:56.337385  716742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 20:34:56.352198  716742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 20:34:56.367450  716742 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 20:34:56.455787  716742 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 20:34:56.554238  716742 docker.go:233] disabling docker service ...
	I0904 20:34:56.554351  716742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 20:34:56.574710  716742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 20:34:56.587825  716742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 20:34:56.687299  716742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 20:34:56.786601  716742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 20:34:56.799474  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 20:34:56.817328  716742 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0904 20:34:56.817397  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.827886  716742 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 20:34:56.828012  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.838976  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.849064  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.859185  716742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 20:34:56.868303  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.878343  716742 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.894559  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.904955  716742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 20:34:56.914184  716742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 20:34:56.924030  716742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:34:57.018394  716742 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 20:34:57.139627  716742 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 20:34:57.139769  716742 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 20:34:57.143458  716742 start.go:563] Will wait 60s for crictl version
	I0904 20:34:57.143551  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:34:57.146967  716742 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 20:34:57.187619  716742 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 20:34:57.187782  716742 ssh_runner.go:195] Run: crio --version
	I0904 20:34:57.230327  716742 ssh_runner.go:195] Run: crio --version
	I0904 20:34:57.274907  716742 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0904 20:34:57.276866  716742 cli_runner.go:164] Run: docker network inspect addons-057989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:34:57.292471  716742 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 20:34:57.296202  716742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:34:57.307224  716742 kubeadm.go:883] updating cluster {Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 20:34:57.307355  716742 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:57.307428  716742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:34:57.381955  716742 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:34:57.381980  716742 crio.go:433] Images already preloaded, skipping extraction
	I0904 20:34:57.382038  716742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:34:57.418097  716742 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:34:57.418121  716742 cache_images.go:84] Images are preloaded, skipping loading
	I0904 20:34:57.418129  716742 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0904 20:34:57.418229  716742 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-057989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 20:34:57.418319  716742 ssh_runner.go:195] Run: crio config
	I0904 20:34:57.464713  716742 cni.go:84] Creating CNI manager for ""
	I0904 20:34:57.464736  716742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:34:57.464747  716742 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 20:34:57.464800  716742 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-057989 NodeName:addons-057989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 20:34:57.464994  716742 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-057989"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 20:34:57.465097  716742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0904 20:34:57.474398  716742 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 20:34:57.474488  716742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 20:34:57.483198  716742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 20:34:57.501099  716742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 20:34:57.519783  716742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0904 20:34:57.538347  716742 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 20:34:57.541777  716742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:34:57.552363  716742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:34:57.634363  716742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:34:57.648929  716742 certs.go:68] Setting up /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989 for IP: 192.168.49.2
	I0904 20:34:57.648953  716742 certs.go:194] generating shared ca certs ...
	I0904 20:34:57.648969  716742 certs.go:226] acquiring lock for ca certs: {Name:mkc3a04cbc0797b819dd3c9fec2eaef93961640b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:57.649112  716742 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key
	I0904 20:34:58.017005  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt ...
	I0904 20:34:58.017043  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt: {Name:mkd95f3346a423afb0e8673b5e71292af3b74b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.017249  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key ...
	I0904 20:34:58.017258  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key: {Name:mk7f9cfa6bde577b19e8374855b89bb733281fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.017337  716742 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key
	I0904 20:34:58.453146  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt ...
	I0904 20:34:58.453179  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt: {Name:mkc81f7ed4f8bfbc83feffd55dc281d29aeb677f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.453378  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key ...
	I0904 20:34:58.453392  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key: {Name:mkbe99405547ce66fa15a0dc370e003355394a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.453479  716742 certs.go:256] generating profile certs ...
	I0904 20:34:58.453540  716742 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.key
	I0904 20:34:58.453557  716742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt with IP's: []
	I0904 20:34:59.380258  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt ...
	I0904 20:34:59.380292  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: {Name:mk7c6d25eef31d0f7545d21b444aedd95ab50fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.380484  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.key ...
	I0904 20:34:59.380497  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.key: {Name:mkb45fcd6f95ce4da37194a6bfd862e0659e59dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.380587  716742 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52
	I0904 20:34:59.380609  716742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0904 20:34:59.599935  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52 ...
	I0904 20:34:59.599969  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52: {Name:mk9bcac5f1d69cf17a003755e1f54f813baa3753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.600669  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52 ...
	I0904 20:34:59.600690  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52: {Name:mkfc8797a4c6c62540408d4ff8b05ec0fca2be8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.601327  716742 certs.go:381] copying /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52 -> /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt
	I0904 20:34:59.601450  716742 certs.go:385] copying /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52 -> /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key
	I0904 20:34:59.601512  716742 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key
	I0904 20:34:59.601536  716742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt with IP's: []
	I0904 20:35:00.752243  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt ...
	I0904 20:35:00.752285  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt: {Name:mkb03587981395a01bab503d3182ecbc4b34513d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:00.752500  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key ...
	I0904 20:35:00.752523  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key: {Name:mk4bede3c921b1b8c749a338fcf99d9201d566d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:00.752724  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 20:35:00.752773  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem (1082 bytes)
	I0904 20:35:00.752808  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem (1123 bytes)
	I0904 20:35:00.752835  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem (1675 bytes)
	I0904 20:35:00.753523  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 20:35:00.789146  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 20:35:00.819640  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 20:35:00.849973  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 20:35:00.878887  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 20:35:00.907623  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 20:35:00.937179  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 20:35:00.966650  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 20:35:00.994197  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 20:35:01.076673  716742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 20:35:01.103416  716742 ssh_runner.go:195] Run: openssl version
	I0904 20:35:01.109985  716742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 20:35:01.121951  716742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:35:01.125896  716742 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:35:01.125966  716742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:35:01.133559  716742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 20:35:01.143886  716742 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 20:35:01.147606  716742 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 20:35:01.147661  716742 kubeadm.go:392] StartCluster: {Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:35:01.147757  716742 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 20:35:01.147846  716742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 20:35:01.189885  716742 cri.go:89] found id: ""
	I0904 20:35:01.189957  716742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 20:35:01.199399  716742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 20:35:01.209262  716742 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 20:35:01.209331  716742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 20:35:01.219268  716742 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 20:35:01.219297  716742 kubeadm.go:157] found existing configuration files:
	
	I0904 20:35:01.219416  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 20:35:01.229900  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 20:35:01.230022  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 20:35:01.239899  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 20:35:01.249771  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 20:35:01.249888  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 20:35:01.259492  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 20:35:01.269530  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 20:35:01.269617  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 20:35:01.278828  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 20:35:01.288498  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 20:35:01.288597  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 20:35:01.297744  716742 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 20:35:01.340416  716742 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0904 20:35:01.340765  716742 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 20:35:01.363185  716742 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 20:35:01.363359  716742 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0904 20:35:01.363401  716742 kubeadm.go:310] OS: Linux
	I0904 20:35:01.363457  716742 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 20:35:01.363510  716742 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 20:35:01.363559  716742 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 20:35:01.363608  716742 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 20:35:01.363657  716742 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 20:35:01.363717  716742 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 20:35:01.363769  716742 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 20:35:01.363825  716742 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 20:35:01.363876  716742 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 20:35:01.428620  716742 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 20:35:01.428815  716742 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 20:35:01.428964  716742 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 20:35:01.438302  716742 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 20:35:01.440900  716742 out.go:235]   - Generating certificates and keys ...
	I0904 20:35:01.441007  716742 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 20:35:01.441111  716742 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 20:35:02.500413  716742 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 20:35:03.216919  716742 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 20:35:03.705499  716742 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 20:35:04.250336  716742 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 20:35:04.593576  716742 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 20:35:04.593731  716742 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-057989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:35:06.225872  716742 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 20:35:06.226139  716742 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-057989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:35:06.469736  716742 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 20:35:07.150890  716742 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 20:35:07.404958  716742 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 20:35:07.405173  716742 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 20:35:07.560068  716742 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 20:35:07.823478  716742 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 20:35:08.117805  716742 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 20:35:08.681466  716742 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 20:35:08.809396  716742 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 20:35:08.810004  716742 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 20:35:08.813080  716742 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 20:35:08.815884  716742 out.go:235]   - Booting up control plane ...
	I0904 20:35:08.815986  716742 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 20:35:08.816061  716742 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 20:35:08.817199  716742 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 20:35:08.827622  716742 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 20:35:08.833681  716742 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 20:35:08.833734  716742 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 20:35:08.922848  716742 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 20:35:08.922965  716742 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 20:35:09.928124  716742 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005060114s
	I0904 20:35:09.928209  716742 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0904 20:35:15.930014  716742 kubeadm.go:310] [api-check] The API server is healthy after 6.002333677s
	I0904 20:35:15.952027  716742 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 20:35:15.967084  716742 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 20:35:15.992846  716742 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 20:35:15.993056  716742 kubeadm.go:310] [mark-control-plane] Marking the node addons-057989 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 20:35:16.042250  716742 kubeadm.go:310] [bootstrap-token] Using token: mex69v.g494u4t2bbxooj6i
	I0904 20:35:16.044971  716742 out.go:235]   - Configuring RBAC rules ...
	I0904 20:35:16.045134  716742 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 20:35:16.055705  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 20:35:16.065010  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 20:35:16.069238  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 20:35:16.073658  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 20:35:16.078420  716742 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 20:35:16.342354  716742 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 20:35:16.769206  716742 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 20:35:17.338932  716742 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 20:35:17.341063  716742 kubeadm.go:310] 
	I0904 20:35:17.341135  716742 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 20:35:17.341141  716742 kubeadm.go:310] 
	I0904 20:35:17.341215  716742 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 20:35:17.341220  716742 kubeadm.go:310] 
	I0904 20:35:17.341244  716742 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 20:35:17.341301  716742 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 20:35:17.341349  716742 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 20:35:17.341354  716742 kubeadm.go:310] 
	I0904 20:35:17.341406  716742 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 20:35:17.341410  716742 kubeadm.go:310] 
	I0904 20:35:17.341456  716742 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 20:35:17.341460  716742 kubeadm.go:310] 
	I0904 20:35:17.341510  716742 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 20:35:17.341581  716742 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 20:35:17.341648  716742 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 20:35:17.341652  716742 kubeadm.go:310] 
	I0904 20:35:17.341733  716742 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 20:35:17.341807  716742 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 20:35:17.341812  716742 kubeadm.go:310] 
	I0904 20:35:17.341912  716742 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mex69v.g494u4t2bbxooj6i \
	I0904 20:35:17.342013  716742 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6a9d6c5dd15cce5623c32315b379ca4db8b8a42e6190c248e6260d57259d6bc7 \
	I0904 20:35:17.342033  716742 kubeadm.go:310] 	--control-plane 
	I0904 20:35:17.342037  716742 kubeadm.go:310] 
	I0904 20:35:17.342119  716742 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 20:35:17.342123  716742 kubeadm.go:310] 
	I0904 20:35:17.342202  716742 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mex69v.g494u4t2bbxooj6i \
	I0904 20:35:17.342301  716742 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6a9d6c5dd15cce5623c32315b379ca4db8b8a42e6190c248e6260d57259d6bc7 
	I0904 20:35:17.347001  716742 kubeadm.go:310] W0904 20:35:01.336876    1192 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 20:35:17.347293  716742 kubeadm.go:310] W0904 20:35:01.337821    1192 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 20:35:17.347501  716742 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0904 20:35:17.347608  716742 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 20:35:17.347628  716742 cni.go:84] Creating CNI manager for ""
	I0904 20:35:17.347636  716742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:35:17.349758  716742 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0904 20:35:17.351534  716742 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 20:35:17.355486  716742 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0904 20:35:17.355512  716742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 20:35:17.375340  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 20:35:17.651571  716742 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 20:35:17.651721  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:17.651808  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-057989 minikube.k8s.io/updated_at=2024_09_04T20_35_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af minikube.k8s.io/name=addons-057989 minikube.k8s.io/primary=true
	I0904 20:35:17.806018  716742 ops.go:34] apiserver oom_adj: -16
	I0904 20:35:17.806125  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:18.306818  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:18.807224  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:19.306261  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:19.806958  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:20.307070  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:20.806301  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:21.306858  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:21.806305  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:21.906144  716742 kubeadm.go:1113] duration metric: took 4.254470959s to wait for elevateKubeSystemPrivileges
	I0904 20:35:21.906172  716742 kubeadm.go:394] duration metric: took 20.758516173s to StartCluster
	I0904 20:35:21.906191  716742 settings.go:142] acquiring lock: {Name:mk78ce0fd69886ee058af8e675a61cdabc51cba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:21.906305  716742 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:35:21.906748  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/kubeconfig: {Name:mk99c3c6b541fdaa941aef3f7a9cb265a3595a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:21.906950  716742 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:35:21.907130  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 20:35:21.907400  716742 config.go:182] Loaded profile config "addons-057989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:35:21.907438  716742 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0904 20:35:21.907532  716742 addons.go:69] Setting yakd=true in profile "addons-057989"
	I0904 20:35:21.907555  716742 addons.go:234] Setting addon yakd=true in "addons-057989"
	I0904 20:35:21.907606  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.908075  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.908362  716742 addons.go:69] Setting inspektor-gadget=true in profile "addons-057989"
	I0904 20:35:21.908387  716742 addons.go:234] Setting addon inspektor-gadget=true in "addons-057989"
	I0904 20:35:21.908419  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.908817  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.909134  716742 addons.go:69] Setting metrics-server=true in profile "addons-057989"
	I0904 20:35:21.909160  716742 addons.go:234] Setting addon metrics-server=true in "addons-057989"
	I0904 20:35:21.909185  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.909577  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.912812  716742 addons.go:69] Setting cloud-spanner=true in profile "addons-057989"
	I0904 20:35:21.912905  716742 addons.go:234] Setting addon cloud-spanner=true in "addons-057989"
	I0904 20:35:21.912980  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.913489  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913871  716742 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-057989"
	I0904 20:35:21.928862  716742 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-057989"
	I0904 20:35:21.928901  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.929448  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913884  716742 addons.go:69] Setting default-storageclass=true in profile "addons-057989"
	I0904 20:35:21.943849  716742 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-057989"
	I0904 20:35:21.944216  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913889  716742 addons.go:69] Setting gcp-auth=true in profile "addons-057989"
	I0904 20:35:21.955803  716742 mustload.go:65] Loading cluster: addons-057989
	I0904 20:35:21.956040  716742 config.go:182] Loaded profile config "addons-057989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:35:21.956384  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913901  716742 addons.go:69] Setting ingress=true in profile "addons-057989"
	I0904 20:35:21.966350  716742 addons.go:234] Setting addon ingress=true in "addons-057989"
	I0904 20:35:21.966464  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.969340  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913908  716742 addons.go:69] Setting ingress-dns=true in profile "addons-057989"
	I0904 20:35:21.971332  716742 addons.go:234] Setting addon ingress-dns=true in "addons-057989"
	I0904 20:35:21.971431  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.971919  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.916107  716742 out.go:177] * Verifying Kubernetes components...
	I0904 20:35:21.999362  716742 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 20:35:21.917099  716742 addons.go:69] Setting storage-provisioner=true in profile "addons-057989"
	I0904 20:35:21.917117  716742 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-057989"
	I0904 20:35:21.917124  716742 addons.go:69] Setting registry=true in profile "addons-057989"
	I0904 20:35:22.006635  716742 addons.go:234] Setting addon registry=true in "addons-057989"
	I0904 20:35:21.917132  716742 addons.go:69] Setting volcano=true in profile "addons-057989"
	I0904 20:35:21.917139  716742 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-057989"
	I0904 20:35:22.006816  716742 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-057989"
	I0904 20:35:22.026442  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.917152  716742 addons.go:69] Setting volumesnapshots=true in profile "addons-057989"
	I0904 20:35:22.041769  716742 addons.go:234] Setting addon volumesnapshots=true in "addons-057989"
	I0904 20:35:22.041853  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.042370  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.026070  716742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:35:22.006570  716742 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-057989"
	I0904 20:35:22.052266  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.053017  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.026105  716742 addons.go:234] Setting addon storage-provisioner=true in "addons-057989"
	I0904 20:35:22.060087  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.006737  716742 addons.go:234] Setting addon volcano=true in "addons-057989"
	I0904 20:35:22.062161  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.062789  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.026143  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.068254  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 20:35:22.068273  716742 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 20:35:22.068332  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.080628  716742 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0904 20:35:22.083423  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.096846  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.117959  716742 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0904 20:35:22.117983  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 20:35:22.118049  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.132214  716742 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0904 20:35:22.133366  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.138078  716742 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0904 20:35:22.138539  716742 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0904 20:35:22.138556  716742 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0904 20:35:22.138623  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.166630  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 20:35:22.166698  716742 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 20:35:22.166806  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.187781  716742 addons.go:234] Setting addon default-storageclass=true in "addons-057989"
	I0904 20:35:22.187824  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.190235  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.232774  716742 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0904 20:35:22.247605  716742 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:35:22.247675  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0904 20:35:22.247754  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.264137  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 20:35:22.299532  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 20:35:22.266024  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 20:35:22.300947  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
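	For reference, the repeated "docker container inspect -f ..." calls above are how the SSH endpoint for these clients is resolved: they read the host port that Docker mapped to the node container's port 22, which on this run is 33528. A minimal stand-alone equivalent (container name taken from this run):

		docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-057989
		# prints 33528 here; minikube then dials ssh as user "docker" on 127.0.0.1:33528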
	W0904 20:35:22.302072  716742 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0904 20:35:22.306600  716742 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0904 20:35:22.308467  716742 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:35:22.308490  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 20:35:22.308561  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.309318  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 20:35:22.313375  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 20:35:22.317741  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 20:35:22.317922  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 20:35:22.317937  716742 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 20:35:22.318014  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.330389  716742 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-057989"
	I0904 20:35:22.330438  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.330871  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.340063  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 20:35:22.340202  716742 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 20:35:22.340246  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0904 20:35:22.342136  716742 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:35:22.342209  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 20:35:22.342312  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.352340  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 20:35:22.352873  716742 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:35:22.352890  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 20:35:22.352954  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.359146  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 20:35:22.359286  716742 out.go:177]   - Using image docker.io/registry:2.8.3
	I0904 20:35:22.360580  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.364471  716742 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0904 20:35:22.364583  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 20:35:22.369625  716742 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 20:35:22.369649  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 20:35:22.369712  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.374826  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 20:35:22.376699  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 20:35:22.378465  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 20:35:22.378495  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 20:35:22.378570  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.406541  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.423195  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.433564  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.450279  716742 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 20:35:22.450300  716742 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 20:35:22.450362  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.497954  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.502029  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.528022  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.537365  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.538274  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.539752  716742 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 20:35:22.543162  716742 out.go:177]   - Using image docker.io/busybox:stable
	I0904 20:35:22.544052  716742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:35:22.545019  716742 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:35:22.545046  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 20:35:22.545106  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.545573  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.561427  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	W0904 20:35:22.564313  716742 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 20:35:22.564341  716742 retry.go:31] will retry after 236.417212ms: ssh: handshake failed: EOF
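	The handshake failure above is transient (the node's sshd is still starting), so the runner backs off briefly and redials. A rough shell equivalent of that retry, using the key path and port from this run (loop bounds and sleep are illustrative; minikube picked a 236ms backoff here):

		for i in 1 2 3 4 5; do
		  ssh -o StrictHostKeyChecking=no -p 33528 \
		    -i /home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa \
		    docker@127.0.0.1 true && break
		  sleep 0.25
		done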
	I0904 20:35:22.575605  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.736511  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 20:35:22.736588  716742 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 20:35:22.857432  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 20:35:22.895253  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 20:35:22.895325  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 20:35:22.905358  716742 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 20:35:22.905429  716742 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 20:35:22.907620  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 20:35:22.907687  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 20:35:22.925747  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:35:22.930451  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:35:22.934382  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 20:35:22.934456  716742 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 20:35:22.971971  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 20:35:22.972050  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 20:35:22.989798  716742 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:35:22.989897  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 20:35:22.992576  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:35:22.994880  716742 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 20:35:22.994953  716742 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 20:35:23.022131  716742 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0904 20:35:23.022204  716742 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0904 20:35:23.038813  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:35:23.070590  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 20:35:23.070665  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 20:35:23.070896  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 20:35:23.070951  716742 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 20:35:23.095955  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:35:23.124642  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 20:35:23.186119  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 20:35:23.186188  716742 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 20:35:23.224395  716742 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 20:35:23.224470  716742 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 20:35:23.228123  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:35:23.241258  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 20:35:23.241328  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 20:35:23.258884  716742 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0904 20:35:23.258919  716742 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0904 20:35:23.288949  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:35:23.288988  716742 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 20:35:23.332003  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:35:23.332068  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 20:35:23.378638  716742 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0904 20:35:23.378709  716742 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0904 20:35:23.434112  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:35:23.459281  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 20:35:23.459356  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 20:35:23.472898  716742 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 20:35:23.472962  716742 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 20:35:23.482949  716742 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0904 20:35:23.483014  716742 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0904 20:35:23.532337  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:35:23.556346  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 20:35:23.556420  716742 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 20:35:23.626276  716742 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0904 20:35:23.626351  716742 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0904 20:35:23.629728  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 20:35:23.629797  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 20:35:23.681098  716742 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:35:23.681173  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 20:35:23.724178  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 20:35:23.724253  716742 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 20:35:23.735934  716742 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 20:35:23.736008  716742 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0904 20:35:23.757739  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:35:23.792226  716742 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 20:35:23.792306  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0904 20:35:23.835787  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 20:35:23.835863  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 20:35:23.900406  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 20:35:23.962593  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 20:35:23.962673  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0904 20:35:24.071188  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:35:24.071310  716742 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 20:35:24.264803  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:35:26.048113  716742 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.748363933s)
	I0904 20:35:26.048143  716742 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
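	For readability: the long sed pipeline completed above simply inserts a hosts block into the CoreDNS Corefile so that pods can resolve host.minikube.internal to the gateway address 192.168.49.1 (it also adds the log plugin). A rough manual equivalent against the same cluster (file name illustrative):

		kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
		# in coredns.yaml, just before the "forward . /etc/resolv.conf" line, add:
		#     hosts {
		#        192.168.49.1 host.minikube.internal
		#        fallthrough
		#     }
		kubectl -n kube-system replace -f coredns.yaml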
	I0904 20:35:26.048457  716742 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.504382184s)
	I0904 20:35:26.050163  716742 node_ready.go:35] waiting up to 6m0s for node "addons-057989" to be "Ready" ...
	I0904 20:35:26.688497  716742 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-057989" context rescaled to 1 replicas
	I0904 20:35:27.060768  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.203246964s)
	I0904 20:35:27.060885  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.135070848s)
	I0904 20:35:28.027815  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.035163919s)
	I0904 20:35:28.027966  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.097438105s)
	I0904 20:35:28.028181  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.989297484s)
	I0904 20:35:28.075309  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:29.030465  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.934428715s)
	I0904 20:35:29.030503  716742 addons.go:475] Verifying addon ingress=true in "addons-057989"
	I0904 20:35:29.030752  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.906040067s)
	I0904 20:35:29.031105  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.802909693s)
	I0904 20:35:29.031127  716742 addons.go:475] Verifying addon registry=true in "addons-057989"
	I0904 20:35:29.031230  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.498818017s)
	I0904 20:35:29.031251  716742 addons.go:475] Verifying addon metrics-server=true in "addons-057989"
	I0904 20:35:29.031166  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.596978997s)
	I0904 20:35:29.033224  716742 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-057989 service yakd-dashboard -n yakd-dashboard
	
	I0904 20:35:29.033237  716742 out.go:177] * Verifying ingress addon...
	I0904 20:35:29.033252  716742 out.go:177] * Verifying registry addon...
	I0904 20:35:29.037301  716742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 20:35:29.038325  716742 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 20:35:29.097659  716742 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:35:29.097684  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:29.098201  716742 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 20:35:29.098256  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
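	The kapi.go polling above (and the many identical "waiting for pod" lines that follow) re-checks the labelled pods until they report Ready, up to the addon's timeout. Roughly the same check, expressed with kubectl wait (timeout value illustrative):

		kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
		kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m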
	I0904 20:35:29.195841  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.438010547s)
	W0904 20:35:29.195882  716742 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 20:35:29.195933  716742 retry.go:31] will retry after 325.249505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
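	The failure above is a CRD establishment race: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are applied in one batch, and the API server has not yet registered the new kind when the class is submitted. The log shows minikube recovering by re-running the apply with --force about 325ms later (20:35:29.521 below), by which point the CRDs are established. When applying such manifests by hand, the usual pattern is to apply the CRDs first and wait for them (file names taken from the stdout above, timeout illustrative):

		kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
		              -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
		              -f snapshot.storage.k8s.io_volumesnapshots.yaml
		kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f csi-hostpath-snapshotclass.yaml   # the VolumeSnapshotClass kind now resolves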
	I0904 20:35:29.196029  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.295502009s)
	I0904 20:35:29.437031  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.172116276s)
	I0904 20:35:29.437112  716742 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-057989"
	I0904 20:35:29.440223  716742 out.go:177] * Verifying csi-hostpath-driver addon...
	I0904 20:35:29.443551  716742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 20:35:29.448779  716742 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:35:29.448848  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:29.521627  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:35:29.578076  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:29.580777  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:29.958657  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:30.072439  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:30.090922  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:30.097632  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:30.448445  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:30.549658  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:30.550969  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:30.807018  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.285307799s)
	I0904 20:35:30.955804  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:31.046561  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:31.047254  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:31.448608  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:31.549691  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:31.550127  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:31.947929  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:32.048692  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:32.051282  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:32.448193  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:32.549459  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:32.552498  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:32.555459  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:32.731032  716742 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 20:35:32.731194  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:32.758991  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:32.884535  716742 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 20:35:32.939016  716742 addons.go:234] Setting addon gcp-auth=true in "addons-057989"
	I0904 20:35:32.939069  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:32.939529  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:32.953253  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:32.961330  716742 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 20:35:32.961383  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:32.995512  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:33.054398  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:33.055434  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:33.128086  716742 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0904 20:35:33.129944  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 20:35:33.131593  716742 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 20:35:33.131723  716742 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 20:35:33.165445  716742 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 20:35:33.165474  716742 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 20:35:33.188075  716742 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:35:33.188102  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 20:35:33.209359  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:35:33.448116  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:33.543925  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:33.544466  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:33.820050  716742 addons.go:475] Verifying addon gcp-auth=true in "addons-057989"
	I0904 20:35:33.822068  716742 out.go:177] * Verifying gcp-auth addon...
	I0904 20:35:33.824433  716742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 20:35:33.848245  716742 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 20:35:33.848266  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:33.947852  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:34.042923  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:34.043825  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:34.328229  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:34.447947  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:34.540311  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:34.542876  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:34.828284  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:34.947534  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:35.042636  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:35.049332  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:35.055122  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:35.330404  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:35.447274  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:35.545340  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:35.546029  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:35.828501  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:35.947228  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:36.063530  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:36.064339  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:36.328724  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:36.448339  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:36.540949  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:36.542310  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:36.827715  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:36.947278  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:37.043049  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:37.043853  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:37.327744  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:37.447836  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:37.540336  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:37.542627  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:37.554000  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:37.828004  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:37.947413  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:38.040942  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:38.043212  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:38.328167  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:38.448210  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:38.540971  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:38.542858  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:38.828183  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:38.948377  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:39.040766  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:39.043219  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:39.327632  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:39.447836  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:39.540398  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:39.542110  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:39.827478  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:39.946894  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:40.070189  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:40.070554  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:40.076822  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:40.328253  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:40.447552  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:40.541741  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:40.542536  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:40.827594  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:40.947071  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:41.040958  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:41.042547  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:41.328122  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:41.447352  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:41.542033  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:41.542923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:41.828247  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:41.947505  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:42.042016  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:42.042953  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:42.328660  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:42.449670  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:42.541520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:42.542951  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:42.554073  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:42.828426  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:42.946914  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:43.041666  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:43.043506  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:43.327678  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:43.447222  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:43.540542  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:43.542481  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:43.827880  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:43.947351  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:44.041905  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:44.042784  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:44.327910  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:44.447338  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:44.540692  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:44.541928  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:44.554128  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:44.827401  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:44.947350  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:45.041914  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:45.046397  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:45.329483  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:45.447350  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:45.541340  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:45.542595  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:45.829160  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:45.947522  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:46.043569  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:46.043935  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:46.327999  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:46.446905  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:46.540678  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:46.542173  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:46.554289  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:46.828320  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:46.947897  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:47.041203  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:47.043417  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:47.327978  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:47.447928  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:47.541679  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:47.542621  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:47.827779  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:47.947381  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:48.042174  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:48.043408  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:48.327968  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:48.447259  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:48.540821  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:48.543687  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:48.829248  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:48.947876  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:49.040634  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:49.042550  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:49.053837  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:49.328536  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:49.446861  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:49.542397  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:49.542833  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:49.828476  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:49.947595  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:50.041512  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:50.045789  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:50.327937  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:50.447163  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:50.541487  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:50.542220  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:50.827853  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:50.947721  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:51.046096  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:51.047230  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:51.054679  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:51.328388  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:51.447569  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:51.541997  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:51.543417  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:51.827469  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:51.947631  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:52.041212  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:52.042374  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:52.328608  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:52.447910  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:52.540711  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:52.542265  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:52.827999  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:52.947770  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:53.041301  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:53.044146  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:53.328745  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:53.448068  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:53.541574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:53.542484  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:53.553703  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:53.828116  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:53.948990  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:54.053696  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:54.054631  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:54.328538  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:54.447218  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:54.540711  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:54.542591  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:54.828305  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:54.948206  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:55.053401  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:55.058261  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:55.328289  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:55.447828  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:55.542014  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:55.543655  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:55.553772  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:55.827440  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:55.953905  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:56.041609  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:56.042737  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:56.327941  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:56.447334  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:56.541509  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:56.542497  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:56.828523  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:56.948026  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:57.040845  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:57.043079  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:57.329593  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:57.447143  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:57.542303  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:57.543506  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:57.553805  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:57.827884  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:57.946797  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:58.042695  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:58.043278  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:58.328009  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:58.447726  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:58.545605  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:58.548895  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:58.828520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:58.947520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:59.041341  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:59.042204  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:59.327308  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:59.447465  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:59.542482  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:59.542723  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:59.827941  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:59.946996  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:00.043176  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:00.104534  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:00.105059  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:00.340965  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:00.449041  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:00.551366  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:00.579002  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:00.828953  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:00.947554  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:01.040695  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:01.042741  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:01.328717  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:01.447395  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:01.540951  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:01.543418  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:01.828763  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:01.947431  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:02.046929  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:02.047190  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:02.327715  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:02.447709  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:02.541086  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:02.542605  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:02.553881  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:02.828768  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:02.947541  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:03.042304  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:03.043125  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:03.328588  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:03.448032  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:03.541497  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:03.544175  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:03.828676  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:03.947371  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:04.044486  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:04.045078  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:04.327521  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:04.447671  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:04.541369  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:04.542203  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:04.828334  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:04.946895  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:05.045224  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:05.055073  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:05.062604  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:05.327935  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:05.447801  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:05.542368  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:05.542768  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:05.827526  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:05.947604  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:06.048078  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:06.049923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:06.330381  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:06.447521  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:06.542229  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:06.542596  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:06.828109  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:06.947118  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:07.040788  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:07.042886  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:07.328337  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:07.447866  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:07.541481  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:07.542481  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:07.553750  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:07.827705  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:07.948008  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:08.042417  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:08.042859  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:08.328090  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:08.447525  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:08.540604  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:08.543433  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:08.837623  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:09.030014  716742 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:36:09.030047  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:09.071445  716742 node_ready.go:49] node "addons-057989" has status "Ready":"True"
	I0904 20:36:09.071472  716742 node_ready.go:38] duration metric: took 43.021269395s for node "addons-057989" to be "Ready" ...
	I0904 20:36:09.071484  716742 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 20:36:09.089105  716742 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:36:09.089125  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:09.090847  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:09.095775  716742 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-k9k5f" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:09.350645  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:09.533278  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:09.574617  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:09.575871  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:09.828155  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:09.978224  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:10.129591  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:10.131040  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:10.143649  716742 pod_ready.go:93] pod "coredns-6f6b679f8f-k9k5f" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.143680  716742 pod_ready.go:82] duration metric: took 1.047863266s for pod "coredns-6f6b679f8f-k9k5f" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.143707  716742 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.159088  716742 pod_ready.go:93] pod "etcd-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.159117  716742 pod_ready.go:82] duration metric: took 15.402507ms for pod "etcd-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.159133  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.171488  716742 pod_ready.go:93] pod "kube-apiserver-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.171518  716742 pod_ready.go:82] duration metric: took 12.375537ms for pod "kube-apiserver-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.171532  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.178648  716742 pod_ready.go:93] pod "kube-controller-manager-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.178676  716742 pod_ready.go:82] duration metric: took 7.13601ms for pod "kube-controller-manager-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.178691  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nc7jl" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.254762  716742 pod_ready.go:93] pod "kube-proxy-nc7jl" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.254796  716742 pod_ready.go:82] duration metric: took 76.096913ms for pod "kube-proxy-nc7jl" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.254811  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.328843  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:10.449765  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:10.544678  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:10.545672  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:10.654072  716742 pod_ready.go:93] pod "kube-scheduler-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.654096  716742 pod_ready.go:82] duration metric: took 399.277222ms for pod "kube-scheduler-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.654108  716742 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.829645  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:10.950366  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:11.050101  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:11.050780  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:11.328571  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:11.449457  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:11.542566  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:11.546248  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:11.830499  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:11.950704  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:12.054205  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:12.055128  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:12.327897  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:12.449726  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:12.543917  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:12.544796  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:12.661369  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:12.827882  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:12.949012  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:13.043831  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:13.046158  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:13.329536  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:13.450676  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:13.545035  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:13.545415  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:13.830985  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:13.948606  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:14.042504  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:14.048344  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:14.328120  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:14.450281  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:14.542904  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:14.544350  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:14.829872  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:14.951552  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:15.047571  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:15.048435  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:15.169903  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:15.329914  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:15.449286  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:15.548925  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:15.549916  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:15.828156  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:15.948710  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:16.055199  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:16.057193  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:16.328604  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:16.448365  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:16.542507  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:16.543480  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:16.828454  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:16.949666  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:17.043797  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:17.044826  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:17.329042  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:17.448956  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:17.550653  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:17.552483  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:17.662444  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:17.828887  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:17.949344  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:18.076252  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:18.077448  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:18.329262  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:18.450325  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:18.542144  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:18.544796  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:18.829083  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:18.949802  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:19.044416  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:19.045190  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:19.328890  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:19.449574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:19.544186  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:19.544394  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:19.835752  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:19.949187  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:20.048527  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:20.049968  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:20.178776  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:20.328791  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:20.449953  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:20.555512  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:20.556916  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:20.831683  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:20.948574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:21.044140  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:21.048581  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:21.329130  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:21.450562  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:21.549963  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:21.550903  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:21.829631  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:21.949015  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:22.046374  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:22.047617  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:22.328800  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:22.449023  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:22.542511  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:22.544223  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:22.660408  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:22.828655  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:22.949624  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:23.044461  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:23.046219  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:23.328978  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:23.448751  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:23.545036  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:23.546547  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:23.828770  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:23.949131  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:24.044659  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:24.044828  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:24.328713  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:24.448992  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:24.543975  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:24.544525  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:24.828665  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:24.948789  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:25.044058  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:25.045416  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:25.177094  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:25.329520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:25.448758  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:25.544315  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:25.546862  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:25.829047  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:25.949309  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:26.042483  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:26.042805  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:26.327729  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:26.448903  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:26.546711  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:26.551153  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:26.829442  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:26.949733  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:27.046187  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:27.046298  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:27.328645  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:27.450636  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:27.546923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:27.548930  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:27.661472  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:27.831563  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:27.951278  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:28.048305  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:28.050473  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:28.327740  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:28.448212  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:28.541352  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:28.544411  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:28.829661  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:28.949150  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:29.044402  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:29.045775  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:29.329086  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:29.460355  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:29.544885  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:29.547005  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:29.661692  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:29.829876  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:29.949946  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:30.054365  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:30.068184  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:30.329170  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:30.450074  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:30.544607  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:30.545699  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:30.828795  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:30.951635  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:31.043816  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:31.045286  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:31.329490  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:31.449348  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:31.543204  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:31.544300  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:31.661927  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:31.828620  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:31.950251  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:32.046442  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:32.048445  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:32.330831  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:32.449125  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:32.542854  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:32.543884  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:32.828558  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:32.948725  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:33.042434  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:33.043885  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:33.330491  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:33.448583  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:33.542414  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:33.542780  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:33.828458  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:33.949029  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:34.055376  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:34.056938  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:34.167167  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:34.329045  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:34.453047  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:34.543030  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:34.546398  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:34.829227  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:34.949551  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:35.050127  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:35.053256  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:35.329695  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:35.448676  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:35.542169  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:35.544984  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:35.828200  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:35.948869  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:36.044814  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:36.057927  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:36.335920  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:36.450101  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:36.541680  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:36.543120  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:36.662952  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:36.828448  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:36.948348  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:37.066077  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:37.066409  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:37.328407  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:37.448523  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:37.542789  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:37.543300  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:37.828403  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:37.949498  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:38.043389  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:38.046624  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:38.328771  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:38.448599  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:38.542461  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:38.544745  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:38.828115  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:38.952286  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:39.042378  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:39.044673  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:39.174566  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:39.334517  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:39.453611  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:39.543024  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:39.544086  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:39.828352  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:39.950126  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:40.056483  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:40.061766  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:40.328992  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:40.461098  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:40.544321  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:40.546350  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:40.828978  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:40.948665  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:41.051314  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:41.058138  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:41.328917  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:41.449551  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:41.550313  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:41.551220  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:41.664986  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:41.828514  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:41.952465  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:42.058001  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:42.059479  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:42.331960  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:42.454692  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:42.546320  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:42.547473  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:42.829791  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:42.950245  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:43.044498  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:43.046219  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:43.328405  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:43.454381  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:43.543453  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:43.544110  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:43.828245  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:43.948674  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:44.042409  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:44.043849  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:44.162551  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:44.328009  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:44.448599  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:44.543132  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:44.545358  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:44.828792  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:44.948747  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:45.087640  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:45.088284  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:45.344816  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:45.449644  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:45.544359  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:45.545990  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:45.829341  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:45.949813  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:46.058111  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:46.058856  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:46.170340  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:46.328474  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:46.449570  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:46.544097  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:46.545673  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:46.829642  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:46.949053  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:47.044822  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:47.046478  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:47.328302  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:47.449782  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:47.546336  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:47.551098  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:47.831960  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:47.949636  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:48.060368  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:48.060851  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:48.329136  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:48.449568  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:48.544057  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:48.544859  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:48.663951  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:48.829109  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:48.949424  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:49.044052  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:49.045593  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:49.333734  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:49.450504  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:49.545632  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:49.547608  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:49.828093  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:49.950097  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:50.069427  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:50.086326  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:50.331658  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:50.449560  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:50.542436  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:50.547014  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:50.832752  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:50.952128  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:51.047186  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:51.050572  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:51.163570  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:51.328417  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:51.449441  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:51.544087  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:51.544380  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:51.829033  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:51.949061  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:52.045024  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:52.045918  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:52.328926  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:52.448578  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:52.541879  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:52.542126  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:52.830078  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:52.963921  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:53.041818  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:53.044116  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:53.170106  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:53.328609  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:53.448820  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:53.544307  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:53.545486  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:53.829016  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:53.948618  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:54.041664  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:54.044249  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:54.328767  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:54.449124  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:54.541717  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:54.543353  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:54.828607  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:54.948470  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:55.051321  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:55.054553  716742 kapi.go:107] duration metric: took 1m26.017250397s to wait for kubernetes.io/minikube-addons=registry ...
	I0904 20:36:55.328521  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:55.448501  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:55.543065  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:55.660302  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:55.827945  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:55.951015  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:56.045384  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:56.332183  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:56.449054  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:56.546373  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:56.830492  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:56.949915  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:57.044356  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:57.328294  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:57.449445  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:57.544569  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:57.674148  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:57.830780  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:57.950151  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:58.046701  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:58.331040  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:58.448712  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:58.544968  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:58.828987  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:58.948346  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:59.042906  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:59.332382  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:59.449048  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:59.543704  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:59.832339  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:59.948917  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:00.044377  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:00.222015  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:00.329414  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:00.487456  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:00.546702  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:00.828580  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:00.950943  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:01.043973  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:01.330576  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:01.448174  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:01.542648  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:01.860366  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:01.965465  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:02.052101  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:02.333323  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:02.449358  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:02.543801  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:02.669696  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:02.838776  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:02.951038  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:03.049985  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:03.327773  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:03.448941  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:03.542817  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:03.828613  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:03.948689  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:04.046396  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:04.328336  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:04.449088  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:04.543775  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:04.828884  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:04.952546  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:05.068506  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:05.207353  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:05.328427  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:05.448931  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:05.543510  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:05.828526  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:05.951448  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:06.047048  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:06.328134  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:06.449585  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:06.543072  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:06.829018  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:06.950732  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:07.044010  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:07.328664  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:07.448981  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:07.542915  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:07.661203  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:07.828185  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:07.953699  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:08.043242  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:08.328648  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:08.448488  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:08.543420  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:08.827853  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:08.956859  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:09.045880  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:09.335624  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:09.450095  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:09.543668  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:09.835572  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:09.950374  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:10.053252  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:10.166375  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:10.328364  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:10.449574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:10.543407  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:10.828502  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:10.949028  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:11.042952  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:11.329417  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:11.453866  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:11.544063  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:11.829066  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:11.950959  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:12.047688  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:12.178429  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:12.337289  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:12.449041  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:12.543497  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:12.829095  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:12.951547  716742 kapi.go:107] duration metric: took 1m43.507997526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0904 20:37:13.048935  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:13.335226  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:13.543922  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:13.828005  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:14.043281  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:14.328415  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:14.543294  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:14.660776  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:14.828685  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:15.072923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:15.330103  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:15.542664  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:15.829204  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:16.058275  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:16.328850  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:16.542954  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:16.828673  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:17.042581  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:17.163249  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:17.327534  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:17.543624  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:17.827957  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:18.045331  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:18.329952  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:18.543428  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:18.829780  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:19.043830  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:19.168137  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:19.328673  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:19.544966  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:19.828409  716742 kapi.go:107] duration metric: took 1m46.003970775s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 20:37:19.830433  716742 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-057989 cluster.
	I0904 20:37:19.832052  716742 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 20:37:19.833771  716742 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0904 20:37:20.045906  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:20.556992  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:21.043726  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:21.173518  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:21.543224  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:22.045682  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:22.543970  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:23.045347  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:23.174483  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:23.543944  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:24.045448  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:24.544024  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:25.074393  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:25.543997  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:25.661982  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:26.066374  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:26.543191  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:27.043090  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:27.542940  716742 kapi.go:107] duration metric: took 1m58.504613312s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 20:37:27.545069  716742 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0904 20:37:27.546958  716742 addons.go:510] duration metric: took 2m5.639519055s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0904 20:37:28.163426  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:30.166696  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:32.661223  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:35.164273  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:37.660899  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:40.164467  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:42.166438  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:44.661375  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:46.662396  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:48.662560  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:49.660114  716742 pod_ready.go:93] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"True"
	I0904 20:37:49.660145  716742 pod_ready.go:82] duration metric: took 1m39.006028182s for pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace to be "Ready" ...
	I0904 20:37:49.660158  716742 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hxn5k" in "kube-system" namespace to be "Ready" ...
	I0904 20:37:49.666071  716742 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-hxn5k" in "kube-system" namespace has status "Ready":"True"
	I0904 20:37:49.666099  716742 pod_ready.go:82] duration metric: took 5.93149ms for pod "nvidia-device-plugin-daemonset-hxn5k" in "kube-system" namespace to be "Ready" ...
	I0904 20:37:49.666121  716742 pod_ready.go:39] duration metric: took 1m40.594604615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 20:37:49.666138  716742 api_server.go:52] waiting for apiserver process to appear ...
	I0904 20:37:49.666166  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 20:37:49.666227  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 20:37:49.723728  716742 cri.go:89] found id: "8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:37:49.723759  716742 cri.go:89] found id: ""
	I0904 20:37:49.723767  716742 logs.go:276] 1 containers: [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59]
	I0904 20:37:49.723827  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.727548  716742 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 20:37:49.727628  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 20:37:49.775692  716742 cri.go:89] found id: "4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:37:49.775716  716742 cri.go:89] found id: ""
	I0904 20:37:49.775725  716742 logs.go:276] 1 containers: [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971]
	I0904 20:37:49.775781  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.779581  716742 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 20:37:49.779678  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 20:37:49.819669  716742 cri.go:89] found id: "2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:37:49.819693  716742 cri.go:89] found id: ""
	I0904 20:37:49.819702  716742 logs.go:276] 1 containers: [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b]
	I0904 20:37:49.819758  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.823267  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 20:37:49.823362  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 20:37:49.862094  716742 cri.go:89] found id: "d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:37:49.862116  716742 cri.go:89] found id: ""
	I0904 20:37:49.862124  716742 logs.go:276] 1 containers: [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608]
	I0904 20:37:49.862225  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.865865  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 20:37:49.865988  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 20:37:49.907687  716742 cri.go:89] found id: "13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:37:49.907711  716742 cri.go:89] found id: ""
	I0904 20:37:49.907720  716742 logs.go:276] 1 containers: [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5]
	I0904 20:37:49.907804  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.911524  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 20:37:49.911619  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 20:37:49.963560  716742 cri.go:89] found id: "7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:37:49.963588  716742 cri.go:89] found id: ""
	I0904 20:37:49.963595  716742 logs.go:276] 1 containers: [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743]
	I0904 20:37:49.963722  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.967436  716742 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 20:37:49.967512  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 20:37:50.027766  716742 cri.go:89] found id: "508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:37:50.027790  716742 cri.go:89] found id: ""
	I0904 20:37:50.027799  716742 logs.go:276] 1 containers: [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706]
	I0904 20:37:50.027863  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:50.049546  716742 logs.go:123] Gathering logs for dmesg ...
	I0904 20:37:50.049571  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 20:37:50.105332  716742 logs.go:123] Gathering logs for coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] ...
	I0904 20:37:50.105413  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:37:50.221432  716742 logs.go:123] Gathering logs for kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] ...
	I0904 20:37:50.221473  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:37:50.276905  716742 logs.go:123] Gathering logs for CRI-O ...
	I0904 20:37:50.276941  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 20:37:50.382777  716742 logs.go:123] Gathering logs for kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] ...
	I0904 20:37:50.382817  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:37:50.476194  716742 logs.go:123] Gathering logs for kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] ...
	I0904 20:37:50.476232  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:37:50.522576  716742 logs.go:123] Gathering logs for container status ...
	I0904 20:37:50.522612  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 20:37:50.578692  716742 logs.go:123] Gathering logs for kubelet ...
	I0904 20:37:50.578725  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0904 20:37:50.609335  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:37:50.609580  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:37:50.671992  716742 logs.go:123] Gathering logs for describe nodes ...
	I0904 20:37:50.672029  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 20:37:50.867431  716742 logs.go:123] Gathering logs for kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] ...
	I0904 20:37:50.867459  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:37:50.932411  716742 logs.go:123] Gathering logs for etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] ...
	I0904 20:37:50.932448  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:37:51.003529  716742 logs.go:123] Gathering logs for kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] ...
	I0904 20:37:51.003585  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:37:51.053994  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:37:51.054031  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0904 20:37:51.054160  716742 out.go:270] X Problems detected in kubelet:
	W0904 20:37:51.054202  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:37:51.054228  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:37:51.054237  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:37:51.054276  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:38:01.054577  716742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 20:38:01.068412  716742 api_server.go:72] duration metric: took 2m39.161430852s to wait for apiserver process to appear ...
	I0904 20:38:01.068486  716742 api_server.go:88] waiting for apiserver healthz status ...
	I0904 20:38:01.068530  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 20:38:01.068610  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 20:38:01.106967  716742 cri.go:89] found id: "8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:01.106991  716742 cri.go:89] found id: ""
	I0904 20:38:01.106998  716742 logs.go:276] 1 containers: [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59]
	I0904 20:38:01.107057  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.110991  716742 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 20:38:01.111071  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 20:38:01.160285  716742 cri.go:89] found id: "4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:01.160309  716742 cri.go:89] found id: ""
	I0904 20:38:01.160316  716742 logs.go:276] 1 containers: [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971]
	I0904 20:38:01.160377  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.164548  716742 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 20:38:01.164621  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 20:38:01.214500  716742 cri.go:89] found id: "2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:01.214527  716742 cri.go:89] found id: ""
	I0904 20:38:01.214536  716742 logs.go:276] 1 containers: [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b]
	I0904 20:38:01.214599  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.218732  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 20:38:01.218808  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 20:38:01.261426  716742 cri.go:89] found id: "d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:01.261458  716742 cri.go:89] found id: ""
	I0904 20:38:01.261468  716742 logs.go:276] 1 containers: [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608]
	I0904 20:38:01.261535  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.265381  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 20:38:01.265456  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 20:38:01.304546  716742 cri.go:89] found id: "13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:01.304570  716742 cri.go:89] found id: ""
	I0904 20:38:01.304578  716742 logs.go:276] 1 containers: [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5]
	I0904 20:38:01.304635  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.308267  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 20:38:01.308344  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 20:38:01.348771  716742 cri.go:89] found id: "7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:01.348801  716742 cri.go:89] found id: ""
	I0904 20:38:01.348811  716742 logs.go:276] 1 containers: [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743]
	I0904 20:38:01.348873  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.353679  716742 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 20:38:01.353756  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 20:38:01.395097  716742 cri.go:89] found id: "508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:01.395119  716742 cri.go:89] found id: ""
	I0904 20:38:01.395127  716742 logs.go:276] 1 containers: [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706]
	I0904 20:38:01.395200  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.399164  716742 logs.go:123] Gathering logs for CRI-O ...
	I0904 20:38:01.399196  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 20:38:01.498148  716742 logs.go:123] Gathering logs for container status ...
	I0904 20:38:01.498186  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 20:38:01.560705  716742 logs.go:123] Gathering logs for kubelet ...
	I0904 20:38:01.560738  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0904 20:38:01.587961  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:01.588204  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:01.651595  716742 logs.go:123] Gathering logs for kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] ...
	I0904 20:38:01.651629  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:01.724603  716742 logs.go:123] Gathering logs for kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] ...
	I0904 20:38:01.724637  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:01.785389  716742 logs.go:123] Gathering logs for kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] ...
	I0904 20:38:01.785426  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:01.826786  716742 logs.go:123] Gathering logs for kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] ...
	I0904 20:38:01.826821  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:01.866479  716742 logs.go:123] Gathering logs for kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] ...
	I0904 20:38:01.866509  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:01.938042  716742 logs.go:123] Gathering logs for dmesg ...
	I0904 20:38:01.938147  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 20:38:01.964182  716742 logs.go:123] Gathering logs for describe nodes ...
	I0904 20:38:01.964208  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 20:38:02.154346  716742 logs.go:123] Gathering logs for etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] ...
	I0904 20:38:02.154476  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:02.217734  716742 logs.go:123] Gathering logs for coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] ...
	I0904 20:38:02.217780  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:02.287714  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:02.287744  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0904 20:38:02.287830  716742 out.go:270] X Problems detected in kubelet:
	W0904 20:38:02.287844  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:02.287878  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:02.287888  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:02.287901  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:38:12.289049  716742 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 20:38:12.297611  716742 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0904 20:38:12.298713  716742 api_server.go:141] control plane version: v1.31.0
	I0904 20:38:12.298744  716742 api_server.go:131] duration metric: took 11.230244619s to wait for apiserver health ...
	I0904 20:38:12.298754  716742 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 20:38:12.298777  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 20:38:12.298845  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 20:38:12.341281  716742 cri.go:89] found id: "8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:12.341303  716742 cri.go:89] found id: ""
	I0904 20:38:12.341311  716742 logs.go:276] 1 containers: [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59]
	I0904 20:38:12.341369  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.345210  716742 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 20:38:12.345295  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 20:38:12.384841  716742 cri.go:89] found id: "4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:12.384863  716742 cri.go:89] found id: ""
	I0904 20:38:12.384871  716742 logs.go:276] 1 containers: [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971]
	I0904 20:38:12.384934  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.388824  716742 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 20:38:12.388897  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 20:38:12.432322  716742 cri.go:89] found id: "2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:12.432344  716742 cri.go:89] found id: ""
	I0904 20:38:12.432352  716742 logs.go:276] 1 containers: [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b]
	I0904 20:38:12.432410  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.436102  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 20:38:12.436180  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 20:38:12.474996  716742 cri.go:89] found id: "d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:12.475018  716742 cri.go:89] found id: ""
	I0904 20:38:12.475025  716742 logs.go:276] 1 containers: [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608]
	I0904 20:38:12.475087  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.478648  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 20:38:12.478726  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 20:38:12.522943  716742 cri.go:89] found id: "13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:12.523008  716742 cri.go:89] found id: ""
	I0904 20:38:12.523022  716742 logs.go:276] 1 containers: [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5]
	I0904 20:38:12.523085  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.526855  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 20:38:12.526930  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 20:38:12.577119  716742 cri.go:89] found id: "7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:12.577153  716742 cri.go:89] found id: ""
	I0904 20:38:12.577190  716742 logs.go:276] 1 containers: [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743]
	I0904 20:38:12.577249  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.580701  716742 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 20:38:12.580774  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 20:38:12.624944  716742 cri.go:89] found id: "508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:12.624967  716742 cri.go:89] found id: ""
	I0904 20:38:12.624975  716742 logs.go:276] 1 containers: [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706]
	I0904 20:38:12.625035  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.628574  716742 logs.go:123] Gathering logs for kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] ...
	I0904 20:38:12.628599  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:12.672932  716742 logs.go:123] Gathering logs for dmesg ...
	I0904 20:38:12.672968  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 20:38:12.691130  716742 logs.go:123] Gathering logs for etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] ...
	I0904 20:38:12.691159  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:12.746973  716742 logs.go:123] Gathering logs for kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] ...
	I0904 20:38:12.747054  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:12.807676  716742 logs.go:123] Gathering logs for kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] ...
	I0904 20:38:12.807724  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:12.855232  716742 logs.go:123] Gathering logs for kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] ...
	I0904 20:38:12.855264  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:12.929481  716742 logs.go:123] Gathering logs for CRI-O ...
	I0904 20:38:12.929521  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 20:38:13.034293  716742 logs.go:123] Gathering logs for container status ...
	I0904 20:38:13.034341  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 20:38:13.089651  716742 logs.go:123] Gathering logs for kubelet ...
	I0904 20:38:13.089682  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0904 20:38:13.118436  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:13.118683  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:13.184420  716742 logs.go:123] Gathering logs for describe nodes ...
	I0904 20:38:13.184459  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 20:38:13.328501  716742 logs.go:123] Gathering logs for kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] ...
	I0904 20:38:13.328532  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:13.382614  716742 logs.go:123] Gathering logs for coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] ...
	I0904 20:38:13.382650  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:13.450831  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:13.450864  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0904 20:38:13.450946  716742 out.go:270] X Problems detected in kubelet:
	W0904 20:38:13.450959  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:13.450989  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:13.450998  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:13.451010  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:38:23.464733  716742 system_pods.go:59] 18 kube-system pods found
	I0904 20:38:23.464785  716742 system_pods.go:61] "coredns-6f6b679f8f-k9k5f" [275ab65d-8cdd-4e33-9a30-8e2dea82c08e] Running
	I0904 20:38:23.464791  716742 system_pods.go:61] "csi-hostpath-attacher-0" [415f2771-f4e0-4711-90b4-bbb3cd155351] Running
	I0904 20:38:23.464798  716742 system_pods.go:61] "csi-hostpath-resizer-0" [fcf10418-cc7b-4979-851d-4f6623df5536] Running
	I0904 20:38:23.464803  716742 system_pods.go:61] "csi-hostpathplugin-mn9qp" [0f3278f5-fc14-4f5d-a426-c25a64816e1c] Running
	I0904 20:38:23.464835  716742 system_pods.go:61] "etcd-addons-057989" [e11680b5-b6b4-44d1-bd13-f62d154e2a01] Running
	I0904 20:38:23.464846  716742 system_pods.go:61] "kindnet-xh95z" [0ad1e90a-ac7c-4bde-a26d-ff3f11c0f743] Running
	I0904 20:38:23.464851  716742 system_pods.go:61] "kube-apiserver-addons-057989" [5aea7959-e9f7-4ddd-8bd2-bac55b04b0c8] Running
	I0904 20:38:23.464856  716742 system_pods.go:61] "kube-controller-manager-addons-057989" [f9bdc1cc-474c-40cf-b9a6-04857fd1dcaf] Running
	I0904 20:38:23.464861  716742 system_pods.go:61] "kube-ingress-dns-minikube" [66349fc9-7ad4-480d-b82b-7fb460b850a2] Running
	I0904 20:38:23.464875  716742 system_pods.go:61] "kube-proxy-nc7jl" [43662cab-76d9-4759-9d5b-6f8c245fa417] Running
	I0904 20:38:23.464880  716742 system_pods.go:61] "kube-scheduler-addons-057989" [66094b5b-4131-480b-aff3-4f9187b9afa4] Running
	I0904 20:38:23.464885  716742 system_pods.go:61] "metrics-server-84c5f94fbc-fq2ps" [42462678-f110-4415-b2f1-367217f8c8a2] Running
	I0904 20:38:23.464903  716742 system_pods.go:61] "nvidia-device-plugin-daemonset-hxn5k" [e2ce6825-b8bf-4d5a-a77f-337ca9cd2e60] Running
	I0904 20:38:23.464907  716742 system_pods.go:61] "registry-6fb4cdfc84-q2v5x" [08b3698e-ab89-4393-846c-c4d5984ebe9e] Running
	I0904 20:38:23.464911  716742 system_pods.go:61] "registry-proxy-xfn95" [19eda952-0370-4c89-ad9f-fa2fcf34e855] Running
	I0904 20:38:23.464915  716742 system_pods.go:61] "snapshot-controller-56fcc65765-2nr7v" [e1ed8e39-dd7b-4cfb-bf3e-3ba5331286b1] Running
	I0904 20:38:23.464922  716742 system_pods.go:61] "snapshot-controller-56fcc65765-tcz8s" [16aa5513-c8b9-4e3b-9c63-2b9d9c64ef30] Running
	I0904 20:38:23.464927  716742 system_pods.go:61] "storage-provisioner" [12d1bdba-0302-4966-8175-e7542a9ae817] Running
	I0904 20:38:23.464937  716742 system_pods.go:74] duration metric: took 11.166175842s to wait for pod list to return data ...
	I0904 20:38:23.464949  716742 default_sa.go:34] waiting for default service account to be created ...
	I0904 20:38:23.467768  716742 default_sa.go:45] found service account: "default"
	I0904 20:38:23.467802  716742 default_sa.go:55] duration metric: took 2.843632ms for default service account to be created ...
	I0904 20:38:23.467813  716742 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 20:38:23.479242  716742 system_pods.go:86] 18 kube-system pods found
	I0904 20:38:23.479286  716742 system_pods.go:89] "coredns-6f6b679f8f-k9k5f" [275ab65d-8cdd-4e33-9a30-8e2dea82c08e] Running
	I0904 20:38:23.479295  716742 system_pods.go:89] "csi-hostpath-attacher-0" [415f2771-f4e0-4711-90b4-bbb3cd155351] Running
	I0904 20:38:23.479301  716742 system_pods.go:89] "csi-hostpath-resizer-0" [fcf10418-cc7b-4979-851d-4f6623df5536] Running
	I0904 20:38:23.479306  716742 system_pods.go:89] "csi-hostpathplugin-mn9qp" [0f3278f5-fc14-4f5d-a426-c25a64816e1c] Running
	I0904 20:38:23.479311  716742 system_pods.go:89] "etcd-addons-057989" [e11680b5-b6b4-44d1-bd13-f62d154e2a01] Running
	I0904 20:38:23.479317  716742 system_pods.go:89] "kindnet-xh95z" [0ad1e90a-ac7c-4bde-a26d-ff3f11c0f743] Running
	I0904 20:38:23.479321  716742 system_pods.go:89] "kube-apiserver-addons-057989" [5aea7959-e9f7-4ddd-8bd2-bac55b04b0c8] Running
	I0904 20:38:23.479332  716742 system_pods.go:89] "kube-controller-manager-addons-057989" [f9bdc1cc-474c-40cf-b9a6-04857fd1dcaf] Running
	I0904 20:38:23.479337  716742 system_pods.go:89] "kube-ingress-dns-minikube" [66349fc9-7ad4-480d-b82b-7fb460b850a2] Running
	I0904 20:38:23.479348  716742 system_pods.go:89] "kube-proxy-nc7jl" [43662cab-76d9-4759-9d5b-6f8c245fa417] Running
	I0904 20:38:23.479353  716742 system_pods.go:89] "kube-scheduler-addons-057989" [66094b5b-4131-480b-aff3-4f9187b9afa4] Running
	I0904 20:38:23.479359  716742 system_pods.go:89] "metrics-server-84c5f94fbc-fq2ps" [42462678-f110-4415-b2f1-367217f8c8a2] Running
	I0904 20:38:23.479367  716742 system_pods.go:89] "nvidia-device-plugin-daemonset-hxn5k" [e2ce6825-b8bf-4d5a-a77f-337ca9cd2e60] Running
	I0904 20:38:23.479371  716742 system_pods.go:89] "registry-6fb4cdfc84-q2v5x" [08b3698e-ab89-4393-846c-c4d5984ebe9e] Running
	I0904 20:38:23.479375  716742 system_pods.go:89] "registry-proxy-xfn95" [19eda952-0370-4c89-ad9f-fa2fcf34e855] Running
	I0904 20:38:23.479384  716742 system_pods.go:89] "snapshot-controller-56fcc65765-2nr7v" [e1ed8e39-dd7b-4cfb-bf3e-3ba5331286b1] Running
	I0904 20:38:23.479388  716742 system_pods.go:89] "snapshot-controller-56fcc65765-tcz8s" [16aa5513-c8b9-4e3b-9c63-2b9d9c64ef30] Running
	I0904 20:38:23.479392  716742 system_pods.go:89] "storage-provisioner" [12d1bdba-0302-4966-8175-e7542a9ae817] Running
	I0904 20:38:23.479403  716742 system_pods.go:126] duration metric: took 11.582438ms to wait for k8s-apps to be running ...
	I0904 20:38:23.479411  716742 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 20:38:23.479471  716742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 20:38:23.492157  716742 system_svc.go:56] duration metric: took 12.73694ms WaitForService to wait for kubelet
	I0904 20:38:23.492198  716742 kubeadm.go:582] duration metric: took 3m1.585223376s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:38:23.492221  716742 node_conditions.go:102] verifying NodePressure condition ...
	I0904 20:38:23.495727  716742 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 20:38:23.495758  716742 node_conditions.go:123] node cpu capacity is 2
	I0904 20:38:23.495769  716742 node_conditions.go:105] duration metric: took 3.542898ms to run NodePressure ...
	I0904 20:38:23.495782  716742 start.go:241] waiting for startup goroutines ...
	I0904 20:38:23.495790  716742 start.go:246] waiting for cluster config update ...
	I0904 20:38:23.495806  716742 start.go:255] writing updated cluster config ...
	I0904 20:38:23.496108  716742 ssh_runner.go:195] Run: rm -f paused
	I0904 20:38:23.838873  716742 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0904 20:38:23.842549  716742 out.go:177] * Done! kubectl is now configured to use "addons-057989" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 20:47:30 addons-057989 crio[964]: time="2024-09-04 20:47:30.676945635Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=af924bf4-cf0a-4668-8bcd-57a85a94deb8 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:47:37 addons-057989 crio[964]: time="2024-09-04 20:47:37.921589821Z" level=info msg="Stopping pod sandbox: a5fe8dfc3bda5721f8506bd5d8b410f6d592c9175f845bdddc527fb4fa9c0a65" id=67cfca91-cc35-43a5-a450-b679f94ce674 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:47:37 addons-057989 crio[964]: time="2024-09-04 20:47:37.930145518Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:a5fe8dfc3bda5721f8506bd5d8b410f6d592c9175f845bdddc527fb4fa9c0a65 UID:6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3 NetNS:/var/run/netns/1991c06f-b378-4261-8f69-9c7c7a6ae572 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 04 20:47:37 addons-057989 crio[964]: time="2024-09-04 20:47:37.930341140Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Sep 04 20:47:37 addons-057989 crio[964]: time="2024-09-04 20:47:37.980291631Z" level=info msg="Stopped pod sandbox: a5fe8dfc3bda5721f8506bd5d8b410f6d592c9175f845bdddc527fb4fa9c0a65" id=67cfca91-cc35-43a5-a450-b679f94ce674 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.669498788Z" level=info msg="Stopping container: 5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09 (timeout: 30s)" id=a2fec8d5-b759-4f80-a194-6fcf15ef45ae name=/runtime.v1.RuntimeService/StopContainer
	Sep 04 20:47:38 addons-057989 conmon[4170]: conmon 5d1ef32c01d81f9f2e03 <ninfo>: container 4181 exited with status 2
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.700740906Z" level=info msg="Stopping container: 83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82 (timeout: 30s)" id=eda389af-664a-456d-94c6-622c81081384 name=/runtime.v1.RuntimeService/StopContainer
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.821324683Z" level=info msg="Stopped container 5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09: kube-system/registry-6fb4cdfc84-q2v5x/registry" id=a2fec8d5-b759-4f80-a194-6fcf15ef45ae name=/runtime.v1.RuntimeService/StopContainer
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.821822789Z" level=info msg="Stopping pod sandbox: abd33d900a8709f76e06f48226e119ce68a12ea4757c26dc8f14ace23a7884de" id=af263875-4f3a-4bf1-88bb-996e8dccad96 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.822067305Z" level=info msg="Got pod network &{Name:registry-6fb4cdfc84-q2v5x Namespace:kube-system ID:abd33d900a8709f76e06f48226e119ce68a12ea4757c26dc8f14ace23a7884de UID:08b3698e-ab89-4393-846c-c4d5984ebe9e NetNS:/var/run/netns/66d7ffa0-533d-429c-9fb0-91ea3753ee5c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.822202531Z" level=info msg="Deleting pod kube-system_registry-6fb4cdfc84-q2v5x from CNI network \"kindnet\" (type=ptp)"
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.863598884Z" level=info msg="Stopped pod sandbox: abd33d900a8709f76e06f48226e119ce68a12ea4757c26dc8f14ace23a7884de" id=af263875-4f3a-4bf1-88bb-996e8dccad96 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.866551813Z" level=info msg="Stopped container 83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82: kube-system/registry-proxy-xfn95/registry-proxy" id=eda389af-664a-456d-94c6-622c81081384 name=/runtime.v1.RuntimeService/StopContainer
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.866964325Z" level=info msg="Stopping pod sandbox: abecba70eabf4357a8b63643987c6144fcec0ad8b89ea2bf8cc99c98c5c91e70" id=b5157c40-d24a-4cd8-a7a0-926f27ca9d6c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.880040122Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-HRWN7DATE42MH3BR - [0:0]\n:KUBE-HP-UWM53ZB7VDINHNPY - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-YWROQ6AKHNADW62A - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vqkrh_ingress-nginx_acf9942a-b013-45ff-8421-2e697ba3f39b_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-HRWN7DATE42MH3BR\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vqkrh_ingress-nginx_acf9942a-b013-45ff-8421-2e697ba3f39b_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-YWROQ6AKHNADW62A\n-A KUBE-HP-HRWN7DATE42MH3BR -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vqkrh_ingress-nginx_acf9942a-b013-45ff-8421-2e697ba3f39b_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-HRWN7DATE42MH3BR -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vqkrh_ingress-nginx_acf9942a-b013-45ff-84
21-2e697ba3f39b_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.19:443\n-A KUBE-HP-YWROQ6AKHNADW62A -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vqkrh_ingress-nginx_acf9942a-b013-45ff-8421-2e697ba3f39b_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-YWROQ6AKHNADW62A -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vqkrh_ingress-nginx_acf9942a-b013-45ff-8421-2e697ba3f39b_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.19:80\n-X KUBE-HP-UWM53ZB7VDINHNPY\nCOMMIT\n"
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.892284650Z" level=info msg="Closing host port tcp:5000"
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.896883288Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.897100235Z" level=info msg="Got pod network &{Name:registry-proxy-xfn95 Namespace:kube-system ID:abecba70eabf4357a8b63643987c6144fcec0ad8b89ea2bf8cc99c98c5c91e70 UID:19eda952-0370-4c89-ad9f-fa2fcf34e855 NetNS:/var/run/netns/67f0ff0b-797d-454e-be72-122061da4d31 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.897255308Z" level=info msg="Deleting pod kube-system_registry-proxy-xfn95 from CNI network \"kindnet\" (type=ptp)"
	Sep 04 20:47:38 addons-057989 crio[964]: time="2024-09-04 20:47:38.925727426Z" level=info msg="Stopped pod sandbox: abecba70eabf4357a8b63643987c6144fcec0ad8b89ea2bf8cc99c98c5c91e70" id=b5157c40-d24a-4cd8-a7a0-926f27ca9d6c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:47:39 addons-057989 crio[964]: time="2024-09-04 20:47:39.733760614Z" level=info msg="Removing container: 83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82" id=4f070854-7a93-4668-9627-e77c0b507236 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 04 20:47:39 addons-057989 crio[964]: time="2024-09-04 20:47:39.758112804Z" level=info msg="Removed container 83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82: kube-system/registry-proxy-xfn95/registry-proxy" id=4f070854-7a93-4668-9627-e77c0b507236 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 04 20:47:39 addons-057989 crio[964]: time="2024-09-04 20:47:39.763339276Z" level=info msg="Removing container: 5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09" id=b75da503-ca52-467c-860a-f48d30615f98 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 04 20:47:39 addons-057989 crio[964]: time="2024-09-04 20:47:39.790846639Z" level=info msg="Removed container 5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09: kube-system/registry-6fb4cdfc84-q2v5x/registry" id=b75da503-ca52-467c-860a-f48d30615f98 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	62c6500a32fdf       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            26 seconds ago      Exited              gadget                     7                   2b8d1e26abe6d       gadget-nm44m
	45aa268b85b7a       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             10 minutes ago      Running             controller                 0                   3db751846b614       ingress-nginx-controller-bc57996ff-vqkrh
	17ccab4a15b48       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 10 minutes ago      Running             gcp-auth                   0                   c1f5e9bf12177       gcp-auth-89d5ffd79-cxk4z
	bf713f4f5efd9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              patch                      0                   335cdd6d925a5       ingress-nginx-admission-patch-kgzlw
	e7757d7bb94ee       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              create                     0                   57688671382bb       ingress-nginx-admission-create-6vqzd
	f2a85a34e5358       gcr.io/cloud-spanner-emulator/emulator@sha256:41ec188288c7943f488600462b2b74002814e52439be82d15de33c3ee4898a58               10 minutes ago      Running             cloud-spanner-emulator     0                   fbaf7f46fd8ef       cloud-spanner-emulator-769b77f747-l4dt7
	461dc54fabae7       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             10 minutes ago      Running             local-path-provisioner     0                   ea483f0b84bff       local-path-provisioner-86d989889c-kw8pv
	e4e68a0857dfd       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             10 minutes ago      Running             minikube-ingress-dns       0                   f3644b533c86c       kube-ingress-dns-minikube
	b6829e8e31f07       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        11 minutes ago      Running             metrics-server             0                   0eacdbbb6c587       metrics-server-84c5f94fbc-fq2ps
	31f827593e3fc       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     11 minutes ago      Running             nvidia-device-plugin-ctr   0                   159cecd95b4b7       nvidia-device-plugin-daemonset-hxn5k
	73fc2fc333315       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              11 minutes ago      Running             yakd                       0                   0dd5920b12b1a       yakd-dashboard-67d98fc6b-7j8ss
	1020fa8b2d129       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             11 minutes ago      Running             storage-provisioner        0                   13409a452e461       storage-provisioner
	2da0c2547a33e       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             11 minutes ago      Running             coredns                    0                   bb705bdc5c322       coredns-6f6b679f8f-k9k5f
	508bb2db26ab2       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                           12 minutes ago      Running             kindnet-cni                0                   7d8b3855e8eb9       kindnet-xh95z
	13931a0aa1133       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                             12 minutes ago      Running             kube-proxy                 0                   f81a1c946ebc8       kube-proxy-nc7jl
	8926a3a460f5f       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                             12 minutes ago      Running             kube-apiserver             0                   092c186577491       kube-apiserver-addons-057989
	4b86be5e13ac3       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             12 minutes ago      Running             etcd                       0                   ee74187eabd31       etcd-addons-057989
	d659a50021dfa       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                             12 minutes ago      Running             kube-scheduler             0                   5bafee131ac20       kube-scheduler-addons-057989
	7276ded69a4bd       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                             12 minutes ago      Running             kube-controller-manager    0                   fa8a52afc7812       kube-controller-manager-addons-057989
	
	
	==> coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] <==
	[INFO] 10.244.0.4:43230 - 36266 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080696s
	[INFO] 10.244.0.4:54625 - 20507 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002721352s
	[INFO] 10.244.0.4:54625 - 17636 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002899991s
	[INFO] 10.244.0.4:47752 - 7177 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069036s
	[INFO] 10.244.0.4:47752 - 24629 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101462s
	[INFO] 10.244.0.4:45701 - 37302 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101938s
	[INFO] 10.244.0.4:45701 - 44725 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000049492s
	[INFO] 10.244.0.4:41451 - 17255 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054743s
	[INFO] 10.244.0.4:41451 - 52577 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004708s
	[INFO] 10.244.0.4:58781 - 44362 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045061s
	[INFO] 10.244.0.4:58781 - 5196 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000714s
	[INFO] 10.244.0.4:33457 - 46149 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001594296s
	[INFO] 10.244.0.4:33457 - 859 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001866158s
	[INFO] 10.244.0.4:53802 - 33736 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000056253s
	[INFO] 10.244.0.4:53802 - 30774 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077364s
	[INFO] 10.244.0.20:37769 - 193 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000226473s
	[INFO] 10.244.0.20:34803 - 4301 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000135128s
	[INFO] 10.244.0.20:58659 - 7520 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152554s
	[INFO] 10.244.0.20:36650 - 49243 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000080572s
	[INFO] 10.244.0.20:38727 - 6956 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000194687s
	[INFO] 10.244.0.20:48234 - 885 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104121s
	[INFO] 10.244.0.20:43383 - 57780 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002078142s
	[INFO] 10.244.0.20:59508 - 59382 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002994471s
	[INFO] 10.244.0.20:37033 - 24816 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002191148s
	[INFO] 10.244.0.20:33558 - 37651 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0021591s
	
	
	==> describe nodes <==
	Name:               addons-057989
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-057989
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=addons-057989
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_04T20_35_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-057989
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 20:35:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-057989
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 20:47:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Sep 2024 20:46:50 +0000   Wed, 04 Sep 2024 20:35:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Sep 2024 20:46:50 +0000   Wed, 04 Sep 2024 20:35:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Sep 2024 20:46:50 +0000   Wed, 04 Sep 2024 20:35:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Sep 2024 20:46:50 +0000   Wed, 04 Sep 2024 20:36:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-057989
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 21d608e1e5814ff9b34c3cb1cfdf5bda
	  System UUID:                19e6588e-4dc5-4438-9acf-c7fa25e5848f
	  Boot ID:                    02fc5889-82d8-42f6-b649-9c13bcf74bdb
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-769b77f747-l4dt7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-nm44m                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-cxk4z                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vqkrh    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-k9k5f                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-057989                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-xh95z                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-addons-057989                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-057989       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nc7jl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-057989                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-fq2ps             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-hxn5k        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-kw8pv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-7j8ss              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-057989 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-057989 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-057989 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-057989 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-057989 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-057989 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-057989 event: Registered Node addons-057989 in Controller
	  Normal   NodeReady                11m                kubelet          Node addons-057989 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 4 20:07] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Sep 4 20:31] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] <==
	{"level":"info","ts":"2024-09-04T20:35:23.724712Z","caller":"traceutil/trace.go:171","msg":"trace[445317914] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:379; }","duration":"178.387084ms","start":"2024-09-04T20:35:23.546305Z","end":"2024-09-04T20:35:23.724692Z","steps":["trace[445317914] 'agreement among raft nodes before linearized reading'  (duration: 104.43113ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T20:35:23.726088Z","caller":"traceutil/trace.go:171","msg":"trace[1975069153] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:379; }","duration":"246.442661ms","start":"2024-09-04T20:35:23.479620Z","end":"2024-09-04T20:35:23.726063Z","steps":["trace[1975069153] 'agreement among raft nodes before linearized reading'  (duration: 170.811026ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T20:35:25.146033Z","caller":"traceutil/trace.go:171","msg":"trace[80645460] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"138.165345ms","start":"2024-09-04T20:35:25.007846Z","end":"2024-09-04T20:35:25.146012Z","steps":["trace[80645460] 'process raft request'  (duration: 62.840469ms)","trace[80645460] 'compare'  (duration: 74.237794ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.581489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.959743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.687455Z","caller":"traceutil/trace.go:171","msg":"trace[1976798533] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:388; }","duration":"253.908912ms","start":"2024-09-04T20:35:25.433511Z","end":"2024-09-04T20:35:25.687420Z","steps":["trace[1976798533] 'range keys from in-memory index tree'  (duration: 147.878228ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T20:35:25.691719Z","caller":"traceutil/trace.go:171","msg":"trace[369956328] linearizableReadLoop","detail":"{readStateIndex:398; appliedIndex:398; }","duration":"110.347393ms","start":"2024-09-04T20:35:25.581351Z","end":"2024-09-04T20:35:25.691698Z","steps":["trace[369956328] 'read index received'  (duration: 110.342412ms)","trace[369956328] 'applied index is now lower than readState.Index'  (duration: 3.881µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.694290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.921597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.707065Z","caller":"traceutil/trace.go:171","msg":"trace[1332958654] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:388; }","duration":"125.695984ms","start":"2024-09-04T20:35:25.581345Z","end":"2024-09-04T20:35:25.707041Z","steps":["trace[1332958654] 'agreement among raft nodes before linearized reading'  (duration: 112.43803ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T20:35:25.884853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.044659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.885401Z","caller":"traceutil/trace.go:171","msg":"trace[1259821602] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:397; }","duration":"130.606ms","start":"2024-09-04T20:35:25.754776Z","end":"2024-09-04T20:35:25.885382Z","steps":["trace[1259821602] 'agreement among raft nodes before linearized reading'  (duration: 53.794706ms)","trace[1259821602] 'range keys from in-memory index tree'  (duration: 76.238145ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.885753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.982311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.888311Z","caller":"traceutil/trace.go:171","msg":"trace[778996107] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:0; response_revision:397; }","duration":"133.536021ms","start":"2024-09-04T20:35:25.754756Z","end":"2024-09-04T20:35:25.888292Z","steps":["trace[778996107] 'agreement among raft nodes before linearized reading'  (duration: 53.824646ms)","trace[778996107] 'range keys from in-memory index tree'  (duration: 77.148805ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.885790Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.04526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.895290Z","caller":"traceutil/trace.go:171","msg":"trace[850430809] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:397; }","duration":"140.533663ms","start":"2024-09-04T20:35:25.754733Z","end":"2024-09-04T20:35:25.895267Z","steps":["trace[850430809] 'agreement among raft nodes before linearized reading'  (duration: 53.853651ms)","trace[850430809] 'range keys from in-memory index tree'  (duration: 77.186547ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.954061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.583009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3455"}
	{"level":"info","ts":"2024-09-04T20:35:26.954130Z","caller":"traceutil/trace.go:171","msg":"trace[1772008450] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:444; }","duration":"167.663204ms","start":"2024-09-04T20:35:26.786452Z","end":"2024-09-04T20:35:26.954115Z","steps":["trace[1772008450] 'agreement among raft nodes before linearized reading'  (duration: 111.077805ms)","trace[1772008450] 'range keys from in-memory index tree'  (duration: 56.418272ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.961722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.697616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:26.961805Z","caller":"traceutil/trace.go:171","msg":"trace[2102871173] range","detail":"{range_begin:/registry/serviceaccounts/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:444; }","duration":"175.792769ms","start":"2024-09-04T20:35:26.785998Z","end":"2024-09-04T20:35:26.961790Z","steps":["trace[2102871173] 'agreement among raft nodes before linearized reading'  (duration: 111.129365ms)","trace[2102871173] 'range keys from in-memory index tree'  (duration: 64.517372ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.962247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.843828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-09-04T20:35:26.962291Z","caller":"traceutil/trace.go:171","msg":"trace[2095094196] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:444; }","duration":"175.943485ms","start":"2024-09-04T20:35:26.786339Z","end":"2024-09-04T20:35:26.962282Z","steps":["trace[2095094196] 'agreement among raft nodes before linearized reading'  (duration: 111.201643ms)","trace[2095094196] 'range keys from in-memory index tree'  (duration: 64.583528ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.962505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.470957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gadget/gadget\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:26.962550Z","caller":"traceutil/trace.go:171","msg":"trace[353857431] range","detail":"{range_begin:/registry/serviceaccounts/gadget/gadget; range_end:; response_count:0; response_revision:444; }","duration":"176.513795ms","start":"2024-09-04T20:35:26.786026Z","end":"2024-09-04T20:35:26.962539Z","steps":["trace[353857431] 'agreement among raft nodes before linearized reading'  (duration: 111.527379ms)","trace[353857431] 'range keys from in-memory index tree'  (duration: 64.934536ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-04T20:45:12.338154Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1560}
	{"level":"info","ts":"2024-09-04T20:45:12.368220Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1560,"took":"29.612622ms","hash":3263472688,"current-db-size-bytes":6590464,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3416064,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-04T20:45:12.368273Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3263472688,"revision":1560,"compact-revision":-1}
	
	
	==> gcp-auth [17ccab4a15b4833c23b0926aecd04a59266538dd48181e9afa4051fa2ef4c952] <==
	2024/09/04 20:37:19 GCP Auth Webhook started!
	2024/09/04 20:38:23 Ready to marshal response ...
	2024/09/04 20:38:23 Ready to write response ...
	2024/09/04 20:38:24 Ready to marshal response ...
	2024/09/04 20:38:24 Ready to write response ...
	2024/09/04 20:38:24 Ready to marshal response ...
	2024/09/04 20:38:24 Ready to write response ...
	2024/09/04 20:46:32 Ready to marshal response ...
	2024/09/04 20:46:32 Ready to write response ...
	2024/09/04 20:46:37 Ready to marshal response ...
	2024/09/04 20:46:37 Ready to write response ...
	2024/09/04 20:47:04 Ready to marshal response ...
	2024/09/04 20:47:04 Ready to write response ...
	
	
	==> kernel <==
	 20:47:40 up  4:30,  0 users,  load average: 0.91, 0.54, 1.35
	Linux addons-057989 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] <==
	I0904 20:45:38.290895       1 main.go:299] handling current node
	I0904 20:45:48.286493       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:45:48.286550       1 main.go:299] handling current node
	I0904 20:45:58.288344       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:45:58.288381       1 main.go:299] handling current node
	I0904 20:46:08.293983       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:46:08.294017       1 main.go:299] handling current node
	I0904 20:46:18.288231       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:46:18.288265       1 main.go:299] handling current node
	I0904 20:46:28.287381       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:46:28.287413       1 main.go:299] handling current node
	I0904 20:46:38.286971       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:46:38.287003       1 main.go:299] handling current node
	I0904 20:46:48.286678       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:46:48.286719       1 main.go:299] handling current node
	I0904 20:46:58.286663       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:46:58.286707       1 main.go:299] handling current node
	I0904 20:47:08.287097       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:47:08.287241       1 main.go:299] handling current node
	I0904 20:47:18.286906       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:47:18.286938       1 main.go:299] handling current node
	I0904 20:47:28.287402       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:47:28.287526       1 main.go:299] handling current node
	I0904 20:47:38.287132       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:47:38.287168       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] <==
	I0904 20:36:29.275713       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 20:36:29.275766       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 20:37:49.386991       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 20:37:49.387073       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0904 20:37:49.387183       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.223.191:443: connect: connection refused" logger="UnhandledError"
	E0904 20:37:49.390494       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.223.191:443: connect: connection refused" logger="UnhandledError"
	E0904 20:37:49.395595       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.223.191:443: connect: connection refused" logger="UnhandledError"
	I0904 20:37:49.490854       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0904 20:46:44.207537       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0904 20:47:20.716830       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.716996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.747162       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.747299       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.757218       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.757301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.799045       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.799887       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.853654       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.853691       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0904 20:47:21.799451       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0904 20:47:21.854331       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0904 20:47:21.948281       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] <==
	E0904 20:47:21.855858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 20:47:21.887283       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0904 20:47:21.887330       1 shared_informer.go:320] Caches are synced for garbage collector
	E0904 20:47:21.950237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:23.019974       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:23.020027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:23.115773       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:23.115818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:23.402899       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:23.402941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:25.015107       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:25.015154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:25.209951       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:25.210069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:26.111883       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:26.111927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:30.010531       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:30.010577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:30.481814       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:30.481878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:31.058899       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:31.058947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:47:38.189998       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:47:38.190044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 20:47:38.651594       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="7.761µs"
	
	
	==> kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] <==
	I0904 20:35:27.820676       1 server_linux.go:66] "Using iptables proxy"
	I0904 20:35:28.601961       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0904 20:35:28.602048       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 20:35:28.838061       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 20:35:28.838226       1 server_linux.go:169] "Using iptables Proxier"
	I0904 20:35:28.840297       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 20:35:28.840905       1 server.go:483] "Version info" version="v1.31.0"
	I0904 20:35:28.840973       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 20:35:28.843573       1 config.go:197] "Starting service config controller"
	I0904 20:35:28.843687       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 20:35:28.881197       1 config.go:104] "Starting endpoint slice config controller"
	I0904 20:35:28.881320       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 20:35:28.883028       1 config.go:326] "Starting node config controller"
	I0904 20:35:28.883115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 20:35:28.997291       1 shared_informer.go:320] Caches are synced for node config
	I0904 20:35:29.021198       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0904 20:35:29.044954       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] <==
	W0904 20:35:14.275096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0904 20:35:14.275232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:14.275216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0904 20:35:14.275327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.102647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0904 20:35:15.102790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.146211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0904 20:35:15.146262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.151331       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0904 20:35:15.151486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.194181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0904 20:35:15.194327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.218850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0904 20:35:15.218973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.252691       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0904 20:35:15.252825       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0904 20:35:15.348686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0904 20:35:15.348826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.376639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0904 20:35:15.376765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.392542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0904 20:35:15.392666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.425197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0904 20:35:15.425318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0904 20:35:17.567935       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 20:47:33 addons-057989 kubelet[1510]: E0904 20:47:33.676012    1510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-nm44m_gadget(d1e7a28e-abcd-4272-8dbe-0fec484e5c83)\"" pod="gadget/gadget-nm44m" podUID="d1e7a28e-abcd-4272-8dbe-0fec484e5c83"
	Sep 04 20:47:37 addons-057989 kubelet[1510]: E0904 20:47:37.051408    1510 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725482857051094157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:498687,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:47:37 addons-057989 kubelet[1510]: E0904 20:47:37.051449    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725482857051094157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:498687,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:47:37 addons-057989 kubelet[1510]: I0904 20:47:37.676166    1510 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-hxn5k" secret="" err="secret \"gcp-auth\" not found"
	Sep 04 20:47:38 addons-057989 kubelet[1510]: I0904 20:47:38.150105    1510 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3-gcp-creds\") pod \"6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3\" (UID: \"6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3\") "
	Sep 04 20:47:38 addons-057989 kubelet[1510]: I0904 20:47:38.150191    1510 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cnnt\" (UniqueName: \"kubernetes.io/projected/6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3-kube-api-access-2cnnt\") pod \"6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3\" (UID: \"6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3\") "
	Sep 04 20:47:38 addons-057989 kubelet[1510]: I0904 20:47:38.151552    1510 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3" (UID: "6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 04 20:47:38 addons-057989 kubelet[1510]: I0904 20:47:38.154268    1510 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3-kube-api-access-2cnnt" (OuterVolumeSpecName: "kube-api-access-2cnnt") pod "6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3" (UID: "6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3"). InnerVolumeSpecName "kube-api-access-2cnnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 20:47:38 addons-057989 kubelet[1510]: I0904 20:47:38.251173    1510 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3-gcp-creds\") on node \"addons-057989\" DevicePath \"\""
	Sep 04 20:47:38 addons-057989 kubelet[1510]: I0904 20:47:38.251230    1510 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2cnnt\" (UniqueName: \"kubernetes.io/projected/6dfa4daa-1c37-4d9e-ad00-34fcfdd74ee3-kube-api-access-2cnnt\") on node \"addons-057989\" DevicePath \"\""
	Sep 04 20:47:38 addons-057989 kubelet[1510]: I0904 20:47:38.958102    1510 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvsvq\" (UniqueName: \"kubernetes.io/projected/08b3698e-ab89-4393-846c-c4d5984ebe9e-kube-api-access-qvsvq\") pod \"08b3698e-ab89-4393-846c-c4d5984ebe9e\" (UID: \"08b3698e-ab89-4393-846c-c4d5984ebe9e\") "
	Sep 04 20:47:38 addons-057989 kubelet[1510]: I0904 20:47:38.962304    1510 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b3698e-ab89-4393-846c-c4d5984ebe9e-kube-api-access-qvsvq" (OuterVolumeSpecName: "kube-api-access-qvsvq") pod "08b3698e-ab89-4393-846c-c4d5984ebe9e" (UID: "08b3698e-ab89-4393-846c-c4d5984ebe9e"). InnerVolumeSpecName "kube-api-access-qvsvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.058625    1510 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frrwc\" (UniqueName: \"kubernetes.io/projected/19eda952-0370-4c89-ad9f-fa2fcf34e855-kube-api-access-frrwc\") pod \"19eda952-0370-4c89-ad9f-fa2fcf34e855\" (UID: \"19eda952-0370-4c89-ad9f-fa2fcf34e855\") "
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.058739    1510 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qvsvq\" (UniqueName: \"kubernetes.io/projected/08b3698e-ab89-4393-846c-c4d5984ebe9e-kube-api-access-qvsvq\") on node \"addons-057989\" DevicePath \"\""
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.069509    1510 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19eda952-0370-4c89-ad9f-fa2fcf34e855-kube-api-access-frrwc" (OuterVolumeSpecName: "kube-api-access-frrwc") pod "19eda952-0370-4c89-ad9f-fa2fcf34e855" (UID: "19eda952-0370-4c89-ad9f-fa2fcf34e855"). InnerVolumeSpecName "kube-api-access-frrwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.166208    1510 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-frrwc\" (UniqueName: \"kubernetes.io/projected/19eda952-0370-4c89-ad9f-fa2fcf34e855-kube-api-access-frrwc\") on node \"addons-057989\" DevicePath \"\""
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.728622    1510 scope.go:117] "RemoveContainer" containerID="83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82"
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.760188    1510 scope.go:117] "RemoveContainer" containerID="83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82"
	Sep 04 20:47:39 addons-057989 kubelet[1510]: E0904 20:47:39.760667    1510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82\": container with ID starting with 83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82 not found: ID does not exist" containerID="83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82"
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.760707    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82"} err="failed to get container status \"83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82\": rpc error: code = NotFound desc = could not find container \"83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82\": container with ID starting with 83cafff68f43b6928784d3ca550cc336a8a2d0d50c9d72b7feecb9c9e2ddda82 not found: ID does not exist"
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.760734    1510 scope.go:117] "RemoveContainer" containerID="5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09"
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.791113    1510 scope.go:117] "RemoveContainer" containerID="5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09"
	Sep 04 20:47:39 addons-057989 kubelet[1510]: E0904 20:47:39.791597    1510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09\": container with ID starting with 5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09 not found: ID does not exist" containerID="5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09"
	Sep 04 20:47:39 addons-057989 kubelet[1510]: I0904 20:47:39.791638    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09"} err="failed to get container status \"5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09\": rpc error: code = NotFound desc = could not find container \"5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09\": container with ID starting with 5d1ef32c01d81f9f2e031294425ab923f58bf3121c3fd136a4cda56782b4ab09 not found: ID does not exist"
	Sep 04 20:47:40 addons-057989 kubelet[1510]: I0904 20:47:40.677263    1510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b3698e-ab89-4393-846c-c4d5984ebe9e" path="/var/lib/kubelet/pods/08b3698e-ab89-4393-846c-c4d5984ebe9e/volumes"
	
	
	==> storage-provisioner [1020fa8b2d129b2c1528e8263e44e0614430ad1edde0adfc959a0b0cead5e677] <==
	I0904 20:36:09.670277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 20:36:09.684669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 20:36:09.684712       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 20:36:09.692410       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 20:36:09.692859       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-057989_59879aa7-61cd-4c49-a7f4-85b770d0ea1d!
	I0904 20:36:09.694887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"311a2453-e39f-4619-9aa1-2dcff1946c80", APIVersion:"v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-057989_59879aa7-61cd-4c49-a7f4-85b770d0ea1d became leader
	I0904 20:36:09.793462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-057989_59879aa7-61cd-4c49-a7f4-85b770d0ea1d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-057989 -n addons-057989
helpers_test.go:261: (dbg) Run:  kubectl --context addons-057989 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-6vqzd ingress-nginx-admission-patch-kgzlw
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-057989 describe pod busybox ingress-nginx-admission-create-6vqzd ingress-nginx-admission-patch-kgzlw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-057989 describe pod busybox ingress-nginx-admission-create-6vqzd ingress-nginx-admission-patch-kgzlw: exit status 1 (98.78094ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-057989/192.168.49.2
	Start Time:       Wed, 04 Sep 2024 20:38:24 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k4dt6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k4dt6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-057989
	  Normal   Pulling    7m46s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m46s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m46s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m16s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6vqzd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kgzlw" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-057989 describe pod busybox ingress-nginx-admission-create-6vqzd ingress-nginx-admission-patch-kgzlw: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.84s)
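Note for anyone triaging this locally: the busybox pod described above is stuck pulling gcr.io/k8s-minikube/busybox:1.28.4-glibc with "unable to retrieve auth token: invalid username/password". A minimal sketch of how the non-running pods could be re-inspected, assuming the addons-057989 profile is still running; these commands are not part of the recorded test output:
	# List pods that are not Running, mirroring the post-mortem helper above
	kubectl --context addons-057989 get pods -A --field-selector=status.phase!=Running
	# Show only the events for the stuck busybox pod (image pull failures included)
	kubectl --context addons-057989 get events -n default --field-selector involvedObject.name=busybox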

TestAddons/parallel/Ingress (150.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-057989 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-057989 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-057989 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b47e1639-3024-4aa5-bc90-9a092e3604ab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b47e1639-3024-4aa5-bc90-9a092e3604ab] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004133649s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-057989 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.54696332s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-057989 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-057989 addons disable ingress --alsologtostderr -v=1: (7.873466113s)
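The curl probe through the ingress timed out (curl's exit status 28 means the operation timed out). A minimal sketch of how that check could be repeated by hand, assuming the addons-057989 profile is still running and the ingress addon has been re-enabled (the test disables it just above); the 10-second cap is an arbitrary choice and these commands are not part of the recorded test output:
	# Repeat the ingress probe with verbose output and an explicit timeout
	out/minikube-linux-arm64 -p addons-057989 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	# Check that the ingress-nginx controller pod and service look healthy
	kubectl --context addons-057989 -n ingress-nginx get pods,svc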
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-057989
helpers_test.go:235: (dbg) docker inspect addons-057989:

-- stdout --
	[
	    {
	        "Id": "73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320",
	        "Created": "2024-09-04T20:34:52.925359137Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717234,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-04T20:34:53.081030632Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8411aacd61cb8f2a7ae48c92e2c9e76ad4dd701b3dba8b30393c5cc31fbd2b15",
	        "ResolvConfPath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/hostname",
	        "HostsPath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/hosts",
	        "LogPath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320-json.log",
	        "Name": "/addons-057989",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-057989:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-057989",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be-init/diff:/var/lib/docker/overlay2/e164f50a1bfe4541271ed61a6ed23c33b9aae141da805b23620713759476fde0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-057989",
	                "Source": "/var/lib/docker/volumes/addons-057989/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-057989",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-057989",
	                "name.minikube.sigs.k8s.io": "addons-057989",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7817bba0f58403312356fc6c068a6420e3474d973c3bf9f9708656d8c06482b",
	            "SandboxKey": "/var/run/docker/netns/e7817bba0f58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-057989": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3dc03972bd677b6f27e0f7eb6bf3c869f01a326f25eec49d8a8d16973aa42236",
	                    "EndpointID": "3f89f8bd76b0eaeb20e7ece98c8b5534a50c35ccfbd1872e98138f979cab06b1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-057989",
	                        "73a9bf226299"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
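In the HostConfig section above, every PortBindings entry has an empty HostPort, which tells Docker to pick ephemeral host ports when the container starts; the values it picked appear under NetworkSettings.Ports (22/tcp on 127.0.0.1:33528 and 8443/tcp on 127.0.0.1:33531 in this run), and the provisioning log further down reads them back with a docker container inspect Go template. A minimal sketch of the same lookup, assuming only that the addons-057989 container from this report is still running (this snippet is illustrative and not part of the minikube code base), could parse the inspect JSON directly:

// Illustrative sketch: print the host port Docker assigned to the
// container's 22/tcp binding, mirroring the
// "docker container inspect -f ... NetworkSettings.Ports ..." calls
// seen in the provisioning log below. The container name
// "addons-057989" is taken from this report.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// portBinding matches one element of NetworkSettings.Ports in docker inspect output.
type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-057989").Output()
	if err != nil {
		log.Fatalf("docker inspect failed: %v", err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatalf("parsing inspect output: %v", err)
	}
	if len(entries) == 0 {
		log.Fatal("no container found")
	}
	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh reachable at %s:%s\n", b.HostIP, b.HostPort)
	}
}

Against this cluster it would print "ssh reachable at 127.0.0.1:33528", matching the port the libmachine SSH client dials in the provisioning log below.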
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-057989 -n addons-057989
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-057989 logs -n 25: (1.516815442s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-729266   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | -p download-only-729266              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| delete  | -p download-only-729266              | download-only-729266   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| start   | -o=json --download-only              | download-only-110365   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | -p download-only-110365              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| delete  | -p download-only-110365              | download-only-110365   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| delete  | -p download-only-729266              | download-only-729266   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| delete  | -p download-only-110365              | download-only-110365   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| start   | --download-only -p                   | download-docker-053885 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | download-docker-053885               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-053885            | download-docker-053885 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| start   | --download-only -p                   | binary-mirror-435820   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | binary-mirror-435820                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40553               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-435820              | binary-mirror-435820   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| addons  | enable dashboard -p                  | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | addons-057989                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | addons-057989                        |                        |         |         |                     |                     |
	| start   | -p addons-057989 --wait=true         | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:38 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-057989 addons                 | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-057989 addons                 | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-057989 ip                     | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	| addons  | addons-057989 addons disable         | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | addons-057989                        |                        |         |         |                     |                     |
	| ssh     | addons-057989 ssh curl -s            | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-057989 ip                     | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:50 UTC | 04 Sep 24 20:50 UTC |
	| addons  | addons-057989 addons disable         | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:50 UTC | 04 Sep 24 20:50 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-057989 addons disable         | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:50 UTC | 04 Sep 24 20:50 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 20:34:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:34:26.364635  716742 out.go:345] Setting OutFile to fd 1 ...
	I0904 20:34:26.364772  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:26.364783  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:34:26.364788  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:26.365015  716742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 20:34:26.365465  716742 out.go:352] Setting JSON to false
	I0904 20:34:26.366331  716742 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15417,"bootTime":1725466650,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0904 20:34:26.366402  716742 start.go:139] virtualization:  
	I0904 20:34:26.368474  716742 out.go:177] * [addons-057989] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0904 20:34:26.370910  716742 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 20:34:26.371038  716742 notify.go:220] Checking for updates...
	I0904 20:34:26.374838  716742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:34:26.376708  716742 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:34:26.378539  716742 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	I0904 20:34:26.380267  716742 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 20:34:26.382309  716742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 20:34:26.384843  716742 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 20:34:26.407170  716742 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0904 20:34:26.407296  716742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:26.468251  716742 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-04 20:34:26.458330655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:26.468366  716742 docker.go:307] overlay module found
	I0904 20:34:26.470430  716742 out.go:177] * Using the docker driver based on user configuration
	I0904 20:34:26.472360  716742 start.go:297] selected driver: docker
	I0904 20:34:26.472375  716742 start.go:901] validating driver "docker" against <nil>
	I0904 20:34:26.472388  716742 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 20:34:26.473037  716742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:26.545537  716742 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-04 20:34:26.534442525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:26.545707  716742 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 20:34:26.546029  716742 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:34:26.547715  716742 out.go:177] * Using Docker driver with root privileges
	I0904 20:34:26.549443  716742 cni.go:84] Creating CNI manager for ""
	I0904 20:34:26.549486  716742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:34:26.549498  716742 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 20:34:26.549644  716742 start.go:340] cluster config:
	{Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPau
seInterval:1m0s}
	I0904 20:34:26.551789  716742 out.go:177] * Starting "addons-057989" primary control-plane node in "addons-057989" cluster
	I0904 20:34:26.553334  716742 cache.go:121] Beginning downloading kic base image for docker with crio
	I0904 20:34:26.554936  716742 out.go:177] * Pulling base image v0.0.45 ...
	I0904 20:34:26.556687  716742 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:26.556763  716742 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0904 20:34:26.556783  716742 cache.go:56] Caching tarball of preloaded images
	I0904 20:34:26.556881  716742 preload.go:172] Found /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0904 20:34:26.556903  716742 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0904 20:34:26.557407  716742 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/config.json ...
	I0904 20:34:26.557448  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/config.json: {Name:mk4c159eebe676425fef59d6562583fda185ed7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:26.557673  716742 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0904 20:34:26.576982  716742 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0904 20:34:26.577098  716742 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0904 20:34:26.577129  716742 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0904 20:34:26.577146  716742 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0904 20:34:26.577161  716742 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0904 20:34:26.577168  716742 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0904 20:34:44.336400  716742 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0904 20:34:44.336436  716742 cache.go:194] Successfully downloaded all kic artifacts
	I0904 20:34:44.336481  716742 start.go:360] acquireMachinesLock for addons-057989: {Name:mk0970b3a3d59ebd1c006a89f39ceb89ec07a595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:34:44.337080  716742 start.go:364] duration metric: took 571.787µs to acquireMachinesLock for "addons-057989"
	I0904 20:34:44.337123  716742 start.go:93] Provisioning new machine with config: &{Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:34:44.337224  716742 start.go:125] createHost starting for "" (driver="docker")
	I0904 20:34:44.340059  716742 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0904 20:34:44.340312  716742 start.go:159] libmachine.API.Create for "addons-057989" (driver="docker")
	I0904 20:34:44.340356  716742 client.go:168] LocalClient.Create starting
	I0904 20:34:44.340489  716742 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem
	I0904 20:34:45.869727  716742 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem
	I0904 20:34:46.857527  716742 cli_runner.go:164] Run: docker network inspect addons-057989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 20:34:46.873370  716742 cli_runner.go:211] docker network inspect addons-057989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 20:34:46.873460  716742 network_create.go:284] running [docker network inspect addons-057989] to gather additional debugging logs...
	I0904 20:34:46.873484  716742 cli_runner.go:164] Run: docker network inspect addons-057989
	W0904 20:34:46.888882  716742 cli_runner.go:211] docker network inspect addons-057989 returned with exit code 1
	I0904 20:34:46.888915  716742 network_create.go:287] error running [docker network inspect addons-057989]: docker network inspect addons-057989: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-057989 not found
	I0904 20:34:46.888929  716742 network_create.go:289] output of [docker network inspect addons-057989]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-057989 not found
	
	** /stderr **
	I0904 20:34:46.889032  716742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:34:46.906072  716742 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017d4850}
	I0904 20:34:46.906116  716742 network_create.go:124] attempt to create docker network addons-057989 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0904 20:34:46.906182  716742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-057989 addons-057989
	I0904 20:34:46.976444  716742 network_create.go:108] docker network addons-057989 192.168.49.0/24 created
	I0904 20:34:46.976479  716742 kic.go:121] calculated static IP "192.168.49.2" for the "addons-057989" container
	I0904 20:34:46.976555  716742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 20:34:46.992316  716742 cli_runner.go:164] Run: docker volume create addons-057989 --label name.minikube.sigs.k8s.io=addons-057989 --label created_by.minikube.sigs.k8s.io=true
	I0904 20:34:47.012952  716742 oci.go:103] Successfully created a docker volume addons-057989
	I0904 20:34:47.013072  716742 cli_runner.go:164] Run: docker run --rm --name addons-057989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-057989 --entrypoint /usr/bin/test -v addons-057989:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib
	I0904 20:34:48.615772  716742 cli_runner.go:217] Completed: docker run --rm --name addons-057989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-057989 --entrypoint /usr/bin/test -v addons-057989:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib: (1.602654612s)
	I0904 20:34:48.615806  716742 oci.go:107] Successfully prepared a docker volume addons-057989
	I0904 20:34:48.615827  716742 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:48.615846  716742 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 20:34:48.615942  716742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-057989:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 20:34:52.860186  716742 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-057989:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir: (4.244202783s)
	I0904 20:34:52.860217  716742 kic.go:203] duration metric: took 4.244368465s to extract preloaded images to volume ...
	W0904 20:34:52.860378  716742 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 20:34:52.860496  716742 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 20:34:52.910765  716742 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-057989 --name addons-057989 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-057989 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-057989 --network addons-057989 --ip 192.168.49.2 --volume addons-057989:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85
	I0904 20:34:53.252893  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Running}}
	I0904 20:34:53.272312  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:34:53.296422  716742 cli_runner.go:164] Run: docker exec addons-057989 stat /var/lib/dpkg/alternatives/iptables
	I0904 20:34:53.389330  716742 oci.go:144] the created container "addons-057989" has a running status.
	I0904 20:34:53.389362  716742 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa...
	I0904 20:34:54.130907  716742 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 20:34:54.153369  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:34:54.171584  716742 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 20:34:54.171605  716742 kic_runner.go:114] Args: [docker exec --privileged addons-057989 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 20:34:54.258829  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:34:54.279547  716742 machine.go:93] provisionDockerMachine start ...
	I0904 20:34:54.279655  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:54.304920  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:54.305248  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:54.305259  716742 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 20:34:54.430287  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-057989
	
	I0904 20:34:54.430311  716742 ubuntu.go:169] provisioning hostname "addons-057989"
	I0904 20:34:54.430389  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:54.451013  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:54.451268  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:54.451286  716742 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-057989 && echo "addons-057989" | sudo tee /etc/hostname
	I0904 20:34:54.595269  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-057989
	
	I0904 20:34:54.595355  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:54.613079  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:54.613362  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:54.613384  716742 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-057989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-057989/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-057989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 20:34:54.733925  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 20:34:54.734009  716742 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19575-710603/.minikube CaCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19575-710603/.minikube}
	I0904 20:34:54.734036  716742 ubuntu.go:177] setting up certificates
	I0904 20:34:54.734046  716742 provision.go:84] configureAuth start
	I0904 20:34:54.734112  716742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-057989
	I0904 20:34:54.752741  716742 provision.go:143] copyHostCerts
	I0904 20:34:54.752830  716742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem (1082 bytes)
	I0904 20:34:54.752951  716742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem (1123 bytes)
	I0904 20:34:54.753017  716742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem (1675 bytes)
	I0904 20:34:54.753069  716742 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem org=jenkins.addons-057989 san=[127.0.0.1 192.168.49.2 addons-057989 localhost minikube]
	I0904 20:34:55.147333  716742 provision.go:177] copyRemoteCerts
	I0904 20:34:55.147404  716742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 20:34:55.147447  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.165454  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.255682  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 20:34:55.281142  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 20:34:55.305289  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 20:34:55.329497  716742 provision.go:87] duration metric: took 595.436326ms to configureAuth
	I0904 20:34:55.329576  716742 ubuntu.go:193] setting minikube options for container-runtime
	I0904 20:34:55.329784  716742 config.go:182] Loaded profile config "addons-057989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:34:55.329932  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.346253  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:55.346495  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:55.346516  716742 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 20:34:55.565686  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 20:34:55.565710  716742 machine.go:96] duration metric: took 1.286141461s to provisionDockerMachine
	I0904 20:34:55.565720  716742 client.go:171] duration metric: took 11.225352854s to LocalClient.Create
	I0904 20:34:55.565732  716742 start.go:167] duration metric: took 11.225421054s to libmachine.API.Create "addons-057989"
	I0904 20:34:55.565740  716742 start.go:293] postStartSetup for "addons-057989" (driver="docker")
	I0904 20:34:55.565751  716742 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 20:34:55.565817  716742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 20:34:55.565881  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.583171  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.671214  716742 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 20:34:55.674581  716742 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 20:34:55.674617  716742 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 20:34:55.674629  716742 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 20:34:55.674636  716742 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0904 20:34:55.674651  716742 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/addons for local assets ...
	I0904 20:34:55.674722  716742 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/files for local assets ...
	I0904 20:34:55.674750  716742 start.go:296] duration metric: took 109.004783ms for postStartSetup
	I0904 20:34:55.675068  716742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-057989
	I0904 20:34:55.690396  716742 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/config.json ...
	I0904 20:34:55.690692  716742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 20:34:55.690748  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.706620  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.790685  716742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 20:34:55.795488  716742 start.go:128] duration metric: took 11.458247104s to createHost
	I0904 20:34:55.795510  716742 start.go:83] releasing machines lock for "addons-057989", held for 11.458409136s
	I0904 20:34:55.795590  716742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-057989
	I0904 20:34:55.811993  716742 ssh_runner.go:195] Run: cat /version.json
	I0904 20:34:55.812023  716742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 20:34:55.812045  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.812092  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.832455  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.839481  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:56.100314  716742 ssh_runner.go:195] Run: systemctl --version
	I0904 20:34:56.104670  716742 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 20:34:56.252641  716742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 20:34:56.256985  716742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:34:56.275558  716742 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 20:34:56.275632  716742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:34:56.310401  716742 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 20:34:56.310468  716742 start.go:495] detecting cgroup driver to use...
	I0904 20:34:56.310517  716742 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 20:34:56.310578  716742 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 20:34:56.326154  716742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 20:34:56.337266  716742 docker.go:217] disabling cri-docker service (if available) ...
	I0904 20:34:56.337385  716742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 20:34:56.352198  716742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 20:34:56.367450  716742 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 20:34:56.455787  716742 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 20:34:56.554238  716742 docker.go:233] disabling docker service ...
	I0904 20:34:56.554351  716742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 20:34:56.574710  716742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 20:34:56.587825  716742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 20:34:56.687299  716742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 20:34:56.786601  716742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 20:34:56.799474  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 20:34:56.817328  716742 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0904 20:34:56.817397  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.827886  716742 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 20:34:56.828012  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.838976  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.849064  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.859185  716742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 20:34:56.868303  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.878343  716742 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.894559  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.904955  716742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 20:34:56.914184  716742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 20:34:56.924030  716742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:34:57.018394  716742 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 20:34:57.139627  716742 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 20:34:57.139769  716742 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 20:34:57.143458  716742 start.go:563] Will wait 60s for crictl version
	I0904 20:34:57.143551  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:34:57.146967  716742 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 20:34:57.187619  716742 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
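
The crictl probe above is the readiness gate for the runtime: minikube waits for the CRI-O socket and then asks crictl for the runtime identity and version. As a rough standalone illustration only (not minikube's code), the same check could be scripted in Go, relying on the /etc/crictl.yaml endpoint written a few lines earlier:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the log shows: ask crictl for the runtime identity/version.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		log.Fatalf("crictl version failed: %v", err)
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		// Lines look like "RuntimeName:  cri-o".
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	if fields["RuntimeName"] != "cri-o" {
		log.Fatalf("unexpected runtime: %q", fields["RuntimeName"])
	}
	fmt.Printf("cri-o %s (CRI API %s) is up\n", fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}
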
	I0904 20:34:57.187782  716742 ssh_runner.go:195] Run: crio --version
	I0904 20:34:57.230327  716742 ssh_runner.go:195] Run: crio --version
	I0904 20:34:57.274907  716742 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0904 20:34:57.276866  716742 cli_runner.go:164] Run: docker network inspect addons-057989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:34:57.292471  716742 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 20:34:57.296202  716742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:34:57.307224  716742 kubeadm.go:883] updating cluster {Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 20:34:57.307355  716742 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:57.307428  716742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:34:57.381955  716742 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:34:57.381980  716742 crio.go:433] Images already preloaded, skipping extraction
	I0904 20:34:57.382038  716742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:34:57.418097  716742 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:34:57.418121  716742 cache_images.go:84] Images are preloaded, skipping loading
	I0904 20:34:57.418129  716742 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0904 20:34:57.418229  716742 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-057989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
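
The kubelet unit override shown above is later materialised as a systemd drop-in (the 363-byte file copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down in this log). A minimal, hypothetical sketch of writing such a drop-in and reloading systemd from Go, using the exact unit text from the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

// Illustrative only: the kubelet drop-in as it appears in the log above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-057989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
`

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		log.Fatal(err)
	}
	// Pick up the new unit file, mirroring the daemon-reload step in the log.
	if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
		log.Fatalf("daemon-reload: %v: %s", err, out)
	}
}
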
	I0904 20:34:57.418319  716742 ssh_runner.go:195] Run: crio config
	I0904 20:34:57.464713  716742 cni.go:84] Creating CNI manager for ""
	I0904 20:34:57.464736  716742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:34:57.464747  716742 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 20:34:57.464800  716742 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-057989 NodeName:addons-057989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 20:34:57.464994  716742 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-057989"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 20:34:57.465097  716742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0904 20:34:57.474398  716742 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 20:34:57.474488  716742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 20:34:57.483198  716742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 20:34:57.501099  716742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 20:34:57.519783  716742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
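
The kubeadm.yaml rendered above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As an illustrative sketch only, not part of minikube, a quick standalone check with the Go standard library can confirm every document carries a kind and apiVersion before kubeadm is invoked; the path matches the one the log copies the config to:

package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	apiRe := regexp.MustCompile(`(?m)^apiVersion:\s*(\S+)`)
	// kubeadm configs separate documents with a bare "---" line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := kindRe.FindStringSubmatch(doc)
		api := apiRe.FindStringSubmatch(doc)
		if kind == nil || api == nil {
			log.Fatalf("document %d is missing kind or apiVersion", i)
		}
		fmt.Printf("doc %d: %s (%s)\n", i, kind[1], api[1])
	}
}
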
	I0904 20:34:57.538347  716742 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 20:34:57.541777  716742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:34:57.552363  716742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:34:57.634363  716742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:34:57.648929  716742 certs.go:68] Setting up /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989 for IP: 192.168.49.2
	I0904 20:34:57.648953  716742 certs.go:194] generating shared ca certs ...
	I0904 20:34:57.648969  716742 certs.go:226] acquiring lock for ca certs: {Name:mkc3a04cbc0797b819dd3c9fec2eaef93961640b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:57.649112  716742 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key
	I0904 20:34:58.017005  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt ...
	I0904 20:34:58.017043  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt: {Name:mkd95f3346a423afb0e8673b5e71292af3b74b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.017249  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key ...
	I0904 20:34:58.017258  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key: {Name:mk7f9cfa6bde577b19e8374855b89bb733281fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.017337  716742 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key
	I0904 20:34:58.453146  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt ...
	I0904 20:34:58.453179  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt: {Name:mkc81f7ed4f8bfbc83feffd55dc281d29aeb677f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.453378  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key ...
	I0904 20:34:58.453392  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key: {Name:mkbe99405547ce66fa15a0dc370e003355394a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.453479  716742 certs.go:256] generating profile certs ...
	I0904 20:34:58.453540  716742 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.key
	I0904 20:34:58.453557  716742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt with IP's: []
	I0904 20:34:59.380258  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt ...
	I0904 20:34:59.380292  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: {Name:mk7c6d25eef31d0f7545d21b444aedd95ab50fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.380484  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.key ...
	I0904 20:34:59.380497  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.key: {Name:mkb45fcd6f95ce4da37194a6bfd862e0659e59dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.380587  716742 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52
	I0904 20:34:59.380609  716742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0904 20:34:59.599935  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52 ...
	I0904 20:34:59.599969  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52: {Name:mk9bcac5f1d69cf17a003755e1f54f813baa3753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.600669  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52 ...
	I0904 20:34:59.600690  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52: {Name:mkfc8797a4c6c62540408d4ff8b05ec0fca2be8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.601327  716742 certs.go:381] copying /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52 -> /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt
	I0904 20:34:59.601450  716742 certs.go:385] copying /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52 -> /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key
	I0904 20:34:59.601512  716742 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key
	I0904 20:34:59.601536  716742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt with IP's: []
	I0904 20:35:00.752243  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt ...
	I0904 20:35:00.752285  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt: {Name:mkb03587981395a01bab503d3182ecbc4b34513d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:00.752500  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key ...
	I0904 20:35:00.752523  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key: {Name:mk4bede3c921b1b8c749a338fcf99d9201d566d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:00.752724  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 20:35:00.752773  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem (1082 bytes)
	I0904 20:35:00.752808  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem (1123 bytes)
	I0904 20:35:00.752835  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem (1675 bytes)
	I0904 20:35:00.753523  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 20:35:00.789146  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 20:35:00.819640  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 20:35:00.849973  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 20:35:00.878887  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 20:35:00.907623  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 20:35:00.937179  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 20:35:00.966650  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 20:35:00.994197  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 20:35:01.076673  716742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
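
The certs.go steps above boil down to: generate a self-signed "minikubeCA" certificate authority, sign per-profile serving and client certificates with it, then copy everything to /var/lib/minikube/certs on the node. A stripped-down sketch of just the CA step, using only the Go standard library (file names and the validity period are illustrative, not minikube's exact layout):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	writePEM("ca.crt", "CERTIFICATE", der)
	writePEM("ca.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(key))
}

func writePEM(path, blockType string, der []byte) {
	f, err := os.Create(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pem.Encode(f, &pem.Block{Type: blockType, Bytes: der}); err != nil {
		log.Fatal(err)
	}
}

The apiserver and proxy-client certificates generated above are then signed against CAs like this one, with the SAN set shown in the log (IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2).
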
	I0904 20:35:01.103416  716742 ssh_runner.go:195] Run: openssl version
	I0904 20:35:01.109985  716742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 20:35:01.121951  716742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:35:01.125896  716742 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:35:01.125966  716742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:35:01.133559  716742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 20:35:01.143886  716742 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 20:35:01.147606  716742 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 20:35:01.147661  716742 kubeadm.go:392] StartCluster: {Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:35:01.147757  716742 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 20:35:01.147846  716742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 20:35:01.189885  716742 cri.go:89] found id: ""
	I0904 20:35:01.189957  716742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 20:35:01.199399  716742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 20:35:01.209262  716742 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 20:35:01.209331  716742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 20:35:01.219268  716742 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 20:35:01.219297  716742 kubeadm.go:157] found existing configuration files:
	
	I0904 20:35:01.219416  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 20:35:01.229900  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 20:35:01.230022  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 20:35:01.239899  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 20:35:01.249771  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 20:35:01.249888  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 20:35:01.259492  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 20:35:01.269530  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 20:35:01.269617  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 20:35:01.278828  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 20:35:01.288498  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 20:35:01.288597  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 20:35:01.297744  716742 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 20:35:01.340416  716742 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0904 20:35:01.340765  716742 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 20:35:01.363185  716742 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 20:35:01.363359  716742 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0904 20:35:01.363401  716742 kubeadm.go:310] OS: Linux
	I0904 20:35:01.363457  716742 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 20:35:01.363510  716742 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 20:35:01.363559  716742 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 20:35:01.363608  716742 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 20:35:01.363657  716742 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 20:35:01.363717  716742 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 20:35:01.363769  716742 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 20:35:01.363825  716742 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 20:35:01.363876  716742 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 20:35:01.428620  716742 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 20:35:01.428815  716742 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 20:35:01.428964  716742 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 20:35:01.438302  716742 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 20:35:01.440900  716742 out.go:235]   - Generating certificates and keys ...
	I0904 20:35:01.441007  716742 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 20:35:01.441111  716742 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 20:35:02.500413  716742 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 20:35:03.216919  716742 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 20:35:03.705499  716742 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 20:35:04.250336  716742 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 20:35:04.593576  716742 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 20:35:04.593731  716742 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-057989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:35:06.225872  716742 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 20:35:06.226139  716742 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-057989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:35:06.469736  716742 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 20:35:07.150890  716742 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 20:35:07.404958  716742 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 20:35:07.405173  716742 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 20:35:07.560068  716742 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 20:35:07.823478  716742 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 20:35:08.117805  716742 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 20:35:08.681466  716742 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 20:35:08.809396  716742 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 20:35:08.810004  716742 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 20:35:08.813080  716742 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 20:35:08.815884  716742 out.go:235]   - Booting up control plane ...
	I0904 20:35:08.815986  716742 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 20:35:08.816061  716742 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 20:35:08.817199  716742 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 20:35:08.827622  716742 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 20:35:08.833681  716742 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 20:35:08.833734  716742 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 20:35:08.922848  716742 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 20:35:08.922965  716742 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 20:35:09.928124  716742 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005060114s
	I0904 20:35:09.928209  716742 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0904 20:35:15.930014  716742 kubeadm.go:310] [api-check] The API server is healthy after 6.002333677s
	I0904 20:35:15.952027  716742 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 20:35:15.967084  716742 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 20:35:15.992846  716742 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 20:35:15.993056  716742 kubeadm.go:310] [mark-control-plane] Marking the node addons-057989 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 20:35:16.042250  716742 kubeadm.go:310] [bootstrap-token] Using token: mex69v.g494u4t2bbxooj6i
	I0904 20:35:16.044971  716742 out.go:235]   - Configuring RBAC rules ...
	I0904 20:35:16.045134  716742 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 20:35:16.055705  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 20:35:16.065010  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 20:35:16.069238  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 20:35:16.073658  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 20:35:16.078420  716742 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 20:35:16.342354  716742 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 20:35:16.769206  716742 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 20:35:17.338932  716742 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 20:35:17.341063  716742 kubeadm.go:310] 
	I0904 20:35:17.341135  716742 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 20:35:17.341141  716742 kubeadm.go:310] 
	I0904 20:35:17.341215  716742 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 20:35:17.341220  716742 kubeadm.go:310] 
	I0904 20:35:17.341244  716742 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 20:35:17.341301  716742 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 20:35:17.341349  716742 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 20:35:17.341354  716742 kubeadm.go:310] 
	I0904 20:35:17.341406  716742 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 20:35:17.341410  716742 kubeadm.go:310] 
	I0904 20:35:17.341456  716742 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 20:35:17.341460  716742 kubeadm.go:310] 
	I0904 20:35:17.341510  716742 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 20:35:17.341581  716742 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 20:35:17.341648  716742 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 20:35:17.341652  716742 kubeadm.go:310] 
	I0904 20:35:17.341733  716742 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 20:35:17.341807  716742 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 20:35:17.341812  716742 kubeadm.go:310] 
	I0904 20:35:17.341912  716742 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mex69v.g494u4t2bbxooj6i \
	I0904 20:35:17.342013  716742 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6a9d6c5dd15cce5623c32315b379ca4db8b8a42e6190c248e6260d57259d6bc7 \
	I0904 20:35:17.342033  716742 kubeadm.go:310] 	--control-plane 
	I0904 20:35:17.342037  716742 kubeadm.go:310] 
	I0904 20:35:17.342119  716742 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 20:35:17.342123  716742 kubeadm.go:310] 
	I0904 20:35:17.342202  716742 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mex69v.g494u4t2bbxooj6i \
	I0904 20:35:17.342301  716742 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6a9d6c5dd15cce5623c32315b379ca4db8b8a42e6190c248e6260d57259d6bc7 
	I0904 20:35:17.347001  716742 kubeadm.go:310] W0904 20:35:01.336876    1192 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 20:35:17.347293  716742 kubeadm.go:310] W0904 20:35:01.337821    1192 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 20:35:17.347501  716742 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0904 20:35:17.347608  716742 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
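
The --discovery-token-ca-cert-hash printed in the join command above is, per kubeadm's convention, the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A small standalone Go check (the path and expected hash are taken from this log; treat it as a sketch, not project code) can recompute and compare it:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// The cluster CA as provisioned on the node in this run.
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	got := "sha256:" + hex.EncodeToString(sum[:])
	want := "sha256:6a9d6c5dd15cce5623c32315b379ca4db8b8a42e6190c248e6260d57259d6bc7"
	fmt.Println(got)
	if got != want {
		log.Fatal("hash does not match the one in the kubeadm join command")
	}
}

Nodes joining with the kubeadm join command shown above use this pin to verify the control plane's CA before trusting it.
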
	I0904 20:35:17.347628  716742 cni.go:84] Creating CNI manager for ""
	I0904 20:35:17.347636  716742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:35:17.349758  716742 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0904 20:35:17.351534  716742 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 20:35:17.355486  716742 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0904 20:35:17.355512  716742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 20:35:17.375340  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 20:35:17.651571  716742 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 20:35:17.651721  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:17.651808  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-057989 minikube.k8s.io/updated_at=2024_09_04T20_35_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af minikube.k8s.io/name=addons-057989 minikube.k8s.io/primary=true
	I0904 20:35:17.806018  716742 ops.go:34] apiserver oom_adj: -16
	I0904 20:35:17.806125  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:18.306818  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:18.807224  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:19.306261  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:19.806958  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:20.307070  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:20.806301  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:21.306858  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:21.806305  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:21.906144  716742 kubeadm.go:1113] duration metric: took 4.254470959s to wait for elevateKubeSystemPrivileges
	I0904 20:35:21.906172  716742 kubeadm.go:394] duration metric: took 20.758516173s to StartCluster
	I0904 20:35:21.906191  716742 settings.go:142] acquiring lock: {Name:mk78ce0fd69886ee058af8e675a61cdabc51cba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:21.906305  716742 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:35:21.906748  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/kubeconfig: {Name:mk99c3c6b541fdaa941aef3f7a9cb265a3595a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:21.906950  716742 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:35:21.907130  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 20:35:21.907400  716742 config.go:182] Loaded profile config "addons-057989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:35:21.907438  716742 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0904 20:35:21.907532  716742 addons.go:69] Setting yakd=true in profile "addons-057989"
	I0904 20:35:21.907555  716742 addons.go:234] Setting addon yakd=true in "addons-057989"
	I0904 20:35:21.907606  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.908075  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.908362  716742 addons.go:69] Setting inspektor-gadget=true in profile "addons-057989"
	I0904 20:35:21.908387  716742 addons.go:234] Setting addon inspektor-gadget=true in "addons-057989"
	I0904 20:35:21.908419  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.908817  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.909134  716742 addons.go:69] Setting metrics-server=true in profile "addons-057989"
	I0904 20:35:21.909160  716742 addons.go:234] Setting addon metrics-server=true in "addons-057989"
	I0904 20:35:21.909185  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.909577  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.912812  716742 addons.go:69] Setting cloud-spanner=true in profile "addons-057989"
	I0904 20:35:21.912905  716742 addons.go:234] Setting addon cloud-spanner=true in "addons-057989"
	I0904 20:35:21.912980  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.913489  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913871  716742 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-057989"
	I0904 20:35:21.928862  716742 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-057989"
	I0904 20:35:21.928901  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.929448  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913884  716742 addons.go:69] Setting default-storageclass=true in profile "addons-057989"
	I0904 20:35:21.943849  716742 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-057989"
	I0904 20:35:21.944216  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913889  716742 addons.go:69] Setting gcp-auth=true in profile "addons-057989"
	I0904 20:35:21.955803  716742 mustload.go:65] Loading cluster: addons-057989
	I0904 20:35:21.956040  716742 config.go:182] Loaded profile config "addons-057989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:35:21.956384  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913901  716742 addons.go:69] Setting ingress=true in profile "addons-057989"
	I0904 20:35:21.966350  716742 addons.go:234] Setting addon ingress=true in "addons-057989"
	I0904 20:35:21.966464  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.969340  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913908  716742 addons.go:69] Setting ingress-dns=true in profile "addons-057989"
	I0904 20:35:21.971332  716742 addons.go:234] Setting addon ingress-dns=true in "addons-057989"
	I0904 20:35:21.971431  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.971919  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.916107  716742 out.go:177] * Verifying Kubernetes components...
	I0904 20:35:21.999362  716742 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 20:35:21.917099  716742 addons.go:69] Setting storage-provisioner=true in profile "addons-057989"
	I0904 20:35:21.917117  716742 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-057989"
	I0904 20:35:21.917124  716742 addons.go:69] Setting registry=true in profile "addons-057989"
	I0904 20:35:22.006635  716742 addons.go:234] Setting addon registry=true in "addons-057989"
	I0904 20:35:21.917132  716742 addons.go:69] Setting volcano=true in profile "addons-057989"
	I0904 20:35:21.917139  716742 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-057989"
	I0904 20:35:22.006816  716742 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-057989"
	I0904 20:35:22.026442  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.917152  716742 addons.go:69] Setting volumesnapshots=true in profile "addons-057989"
	I0904 20:35:22.041769  716742 addons.go:234] Setting addon volumesnapshots=true in "addons-057989"
	I0904 20:35:22.041853  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.042370  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.026070  716742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:35:22.006570  716742 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-057989"
	I0904 20:35:22.052266  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.053017  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.026105  716742 addons.go:234] Setting addon storage-provisioner=true in "addons-057989"
	I0904 20:35:22.060087  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.006737  716742 addons.go:234] Setting addon volcano=true in "addons-057989"
	I0904 20:35:22.062161  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.062789  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.026143  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.068254  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 20:35:22.068273  716742 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 20:35:22.068332  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.080628  716742 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0904 20:35:22.083423  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.096846  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.117959  716742 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0904 20:35:22.117983  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 20:35:22.118049  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.132214  716742 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0904 20:35:22.133366  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.138078  716742 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0904 20:35:22.138539  716742 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0904 20:35:22.138556  716742 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0904 20:35:22.138623  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.166630  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 20:35:22.166698  716742 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 20:35:22.166806  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.187781  716742 addons.go:234] Setting addon default-storageclass=true in "addons-057989"
	I0904 20:35:22.187824  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.190235  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.232774  716742 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0904 20:35:22.247605  716742 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:35:22.247675  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0904 20:35:22.247754  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.264137  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 20:35:22.299532  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 20:35:22.266024  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
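
The long pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway: it injects a hosts block ahead of the existing "forward . /etc/resolv.conf" directive and adds "log" after "errors". A toy Go sketch of the same Corefile edit, operating on a shortened string rather than the live ConfigMap:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated Corefile; the real one carries more plugins.
	corefile := `.:53 {
    errors
    health
    forward . /etc/resolv.conf
    cache 30
}`
	hosts := `    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf`
	// Insert the hosts block before the forward directive, as the sed expression does.
	out := strings.Replace(corefile, "    forward . /etc/resolv.conf", hosts, 1)
	// And enable query logging right after the errors plugin.
	out = strings.Replace(out, "    errors", "    errors\n    log", 1)
	fmt.Println(out)
}
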
	I0904 20:35:22.300947  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	W0904 20:35:22.302072  716742 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0904 20:35:22.306600  716742 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0904 20:35:22.308467  716742 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:35:22.308490  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 20:35:22.308561  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.309318  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 20:35:22.313375  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 20:35:22.317741  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 20:35:22.317922  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 20:35:22.317937  716742 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 20:35:22.318014  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.330389  716742 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-057989"
	I0904 20:35:22.330438  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.330871  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.340063  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 20:35:22.340202  716742 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 20:35:22.340246  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0904 20:35:22.342136  716742 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:35:22.342209  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 20:35:22.342312  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.352340  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 20:35:22.352873  716742 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:35:22.352890  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 20:35:22.352954  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.359146  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 20:35:22.359286  716742 out.go:177]   - Using image docker.io/registry:2.8.3
	I0904 20:35:22.360580  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.364471  716742 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0904 20:35:22.364583  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 20:35:22.369625  716742 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 20:35:22.369649  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 20:35:22.369712  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.374826  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 20:35:22.376699  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 20:35:22.378465  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 20:35:22.378495  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 20:35:22.378570  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.406541  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.423195  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.433564  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.450279  716742 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 20:35:22.450300  716742 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 20:35:22.450362  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.497954  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.502029  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.528022  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.537365  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.538274  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.539752  716742 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 20:35:22.543162  716742 out.go:177]   - Using image docker.io/busybox:stable
	I0904 20:35:22.544052  716742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:35:22.545019  716742 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:35:22.545046  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 20:35:22.545106  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.545573  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.561427  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	W0904 20:35:22.564313  716742 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 20:35:22.564341  716742 retry.go:31] will retry after 236.417212ms: ssh: handshake failed: EOF
	I0904 20:35:22.575605  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.736511  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 20:35:22.736588  716742 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 20:35:22.857432  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 20:35:22.895253  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 20:35:22.895325  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 20:35:22.905358  716742 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 20:35:22.905429  716742 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 20:35:22.907620  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 20:35:22.907687  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 20:35:22.925747  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:35:22.930451  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:35:22.934382  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 20:35:22.934456  716742 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 20:35:22.971971  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 20:35:22.972050  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 20:35:22.989798  716742 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:35:22.989897  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 20:35:22.992576  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:35:22.994880  716742 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 20:35:22.994953  716742 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 20:35:23.022131  716742 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0904 20:35:23.022204  716742 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0904 20:35:23.038813  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:35:23.070590  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 20:35:23.070665  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 20:35:23.070896  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 20:35:23.070951  716742 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 20:35:23.095955  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:35:23.124642  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 20:35:23.186119  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 20:35:23.186188  716742 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 20:35:23.224395  716742 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 20:35:23.224470  716742 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 20:35:23.228123  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:35:23.241258  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 20:35:23.241328  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 20:35:23.258884  716742 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0904 20:35:23.258919  716742 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0904 20:35:23.288949  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:35:23.288988  716742 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 20:35:23.332003  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:35:23.332068  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 20:35:23.378638  716742 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0904 20:35:23.378709  716742 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0904 20:35:23.434112  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:35:23.459281  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 20:35:23.459356  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 20:35:23.472898  716742 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 20:35:23.472962  716742 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 20:35:23.482949  716742 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0904 20:35:23.483014  716742 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0904 20:35:23.532337  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:35:23.556346  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 20:35:23.556420  716742 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 20:35:23.626276  716742 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0904 20:35:23.626351  716742 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0904 20:35:23.629728  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 20:35:23.629797  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 20:35:23.681098  716742 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:35:23.681173  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 20:35:23.724178  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 20:35:23.724253  716742 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 20:35:23.735934  716742 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 20:35:23.736008  716742 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0904 20:35:23.757739  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:35:23.792226  716742 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 20:35:23.792306  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0904 20:35:23.835787  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 20:35:23.835863  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 20:35:23.900406  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 20:35:23.962593  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 20:35:23.962673  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0904 20:35:24.071188  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:35:24.071310  716742 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 20:35:24.264803  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:35:26.048113  716742 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.748363933s)
	I0904 20:35:26.048143  716742 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
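The completed sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.49.1 inside the cluster. A hedged way to confirm the injected record by hand (not part of the test; context and namespace taken from the log) is:

	# sketch: read back the hosts block that the sed expression above inserted
	kubectl --context addons-057989 -n kube-system get configmap coredns \
	  -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# expected fragment, per the sed expression in the logged command:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }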
	I0904 20:35:26.048457  716742 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.504382184s)
	I0904 20:35:26.050163  716742 node_ready.go:35] waiting up to 6m0s for node "addons-057989" to be "Ready" ...
	I0904 20:35:26.688497  716742 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-057989" context rescaled to 1 replicas
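The "rescaled to 1 replicas" message above corresponds to scaling the coredns deployment down to a single replica; a hedged kubectl equivalent (assuming the default deployment name, which is not shown in the log) would be:

	# sketch of the rescale step logged above
	kubectl --context addons-057989 -n kube-system scale deployment coredns --replicas=1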
	I0904 20:35:27.060768  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.203246964s)
	I0904 20:35:27.060885  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.135070848s)
	I0904 20:35:28.027815  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.035163919s)
	I0904 20:35:28.027966  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.097438105s)
	I0904 20:35:28.028181  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.989297484s)
	I0904 20:35:28.075309  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
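The node_ready.go lines poll the node's Ready condition until it reports True, within the 6m0s budget noted earlier. A hedged one-liner to inspect the same condition manually:

	# sketch: read the Ready condition that node_ready.go is polling
	kubectl --context addons-057989 get node addons-057989 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'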
	I0904 20:35:29.030465  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.934428715s)
	I0904 20:35:29.030503  716742 addons.go:475] Verifying addon ingress=true in "addons-057989"
	I0904 20:35:29.030752  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.906040067s)
	I0904 20:35:29.031105  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.802909693s)
	I0904 20:35:29.031127  716742 addons.go:475] Verifying addon registry=true in "addons-057989"
	I0904 20:35:29.031230  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.498818017s)
	I0904 20:35:29.031251  716742 addons.go:475] Verifying addon metrics-server=true in "addons-057989"
	I0904 20:35:29.031166  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.596978997s)
	I0904 20:35:29.033224  716742 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-057989 service yakd-dashboard -n yakd-dashboard
	
	I0904 20:35:29.033237  716742 out.go:177] * Verifying ingress addon...
	I0904 20:35:29.033252  716742 out.go:177] * Verifying registry addon...
	I0904 20:35:29.037301  716742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 20:35:29.038325  716742 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 20:35:29.097659  716742 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:35:29.097684  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:29.098201  716742 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 20:35:29.098256  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
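The kapi waits above block until pods carrying each addon's label selector leave Pending and become Ready; hedged manual equivalents, with labels and namespaces copied from the log lines above:

	# sketches of the label selectors being polled above
	kubectl --context addons-057989 -n kube-system   get pods -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-057989 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx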
	I0904 20:35:29.195841  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.438010547s)
	W0904 20:35:29.195882  716742 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 20:35:29.195933  716742 retry.go:31] will retry after 325.249505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
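The failure above is an ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRDs that define it, so the first apply can hit the API server before the CRD is registered ("ensure CRDs are installed first"); minikube retries, and the 20:35:29 retry further down succeeds with kubectl apply --force. A hedged sketch of avoiding the race by waiting for the CRD to be established before applying the class (file paths as in the log; this is not the addon's actual retry path):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml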
	I0904 20:35:29.196029  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.295502009s)
	I0904 20:35:29.437031  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.172116276s)
	I0904 20:35:29.437112  716742 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-057989"
	I0904 20:35:29.440223  716742 out.go:177] * Verifying csi-hostpath-driver addon...
	I0904 20:35:29.443551  716742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 20:35:29.448779  716742 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:35:29.448848  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:29.521627  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:35:29.578076  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:29.580777  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:29.958657  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:30.072439  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:30.090922  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:30.097632  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:30.448445  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:30.549658  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:30.550969  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:30.807018  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.285307799s)
	I0904 20:35:30.955804  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:31.046561  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:31.047254  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:31.448608  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:31.549691  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:31.550127  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:31.947929  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:32.048692  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:32.051282  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:32.448193  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:32.549459  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:32.552498  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:32.555459  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:32.731032  716742 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 20:35:32.731194  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:32.758991  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:32.884535  716742 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 20:35:32.939016  716742 addons.go:234] Setting addon gcp-auth=true in "addons-057989"
	I0904 20:35:32.939069  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:32.939529  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:32.953253  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:32.961330  716742 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 20:35:32.961383  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:32.995512  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:33.054398  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:33.055434  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:33.128086  716742 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0904 20:35:33.129944  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 20:35:33.131593  716742 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 20:35:33.131723  716742 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 20:35:33.165445  716742 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 20:35:33.165474  716742 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 20:35:33.188075  716742 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:35:33.188102  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 20:35:33.209359  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:35:33.448116  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:33.543925  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:33.544466  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:33.820050  716742 addons.go:475] Verifying addon gcp-auth=true in "addons-057989"
	I0904 20:35:33.822068  716742 out.go:177] * Verifying gcp-auth addon...
	I0904 20:35:33.824433  716742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 20:35:33.848245  716742 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 20:35:33.848266  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:33.947852  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:34.042923  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:34.043825  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:34.328229  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:34.447947  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:34.540311  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:34.542876  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:34.828284  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:34.947534  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:35.042636  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:35.049332  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:35.055122  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:35.330404  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:35.447274  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:35.545340  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:35.546029  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:35.828501  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:35.947228  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:36.063530  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:36.064339  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:36.328724  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:36.448339  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:36.540949  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:36.542310  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:36.827715  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:36.947278  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:37.043049  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:37.043853  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:37.327744  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:37.447836  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:37.540336  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:37.542627  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:37.554000  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:37.828004  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:37.947413  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:38.040942  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:38.043212  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:38.328167  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:38.448210  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:38.540971  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:38.542858  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:38.828183  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:38.948377  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:39.040766  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:39.043219  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:39.327632  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:39.447836  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:39.540398  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:39.542110  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:39.827478  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:39.946894  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:40.070189  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:40.070554  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:40.076822  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:40.328253  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:40.447552  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:40.541741  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:40.542536  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:40.827594  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:40.947071  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:41.040958  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:41.042547  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:41.328122  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:41.447352  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:41.542033  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:41.542923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:41.828247  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:41.947505  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:42.042016  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:42.042953  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:42.328660  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:42.449670  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:42.541520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:42.542951  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:42.554073  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:42.828426  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:42.946914  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:43.041666  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:43.043506  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:43.327678  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:43.447222  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:43.540542  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:43.542481  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:43.827880  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:43.947351  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:44.041905  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:44.042784  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:44.327910  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:44.447338  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:44.540692  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:44.541928  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:44.554128  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:44.827401  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:44.947350  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:45.041914  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:45.046397  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:45.329483  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:45.447350  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:45.541340  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:45.542595  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:45.829160  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:45.947522  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:46.043569  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:46.043935  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:46.327999  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:46.446905  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:46.540678  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:46.542173  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:46.554289  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:46.828320  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:46.947897  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:47.041203  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:47.043417  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:47.327978  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:47.447928  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:47.541679  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:47.542621  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:47.827779  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:47.947381  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:48.042174  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:48.043408  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:48.327968  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:48.447259  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:48.540821  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:48.543687  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:48.829248  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:48.947876  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:49.040634  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:49.042550  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:49.053837  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:49.328536  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:49.446861  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:49.542397  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:49.542833  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:49.828476  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:49.947595  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:50.041512  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:50.045789  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:50.327937  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:50.447163  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:50.541487  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:50.542220  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:50.827853  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:50.947721  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:51.046096  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:51.047230  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:51.054679  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:51.328388  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:51.447569  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:51.541997  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:51.543417  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:51.827469  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:51.947631  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:52.041212  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:52.042374  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:52.328608  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:52.447910  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:52.540711  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:52.542265  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:52.827999  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:52.947770  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:53.041301  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:53.044146  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:53.328745  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:53.448068  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:53.541574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:53.542484  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:53.553703  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:53.828116  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:53.948990  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:54.053696  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:54.054631  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:54.328538  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:54.447218  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:54.540711  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:54.542591  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:54.828305  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:54.948206  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:55.053401  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:55.058261  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:55.328289  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:55.447828  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:55.542014  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:55.543655  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:55.553772  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:55.827440  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:55.953905  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:56.041609  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:56.042737  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:56.327941  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:56.447334  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:56.541509  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:56.542497  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:56.828523  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:56.948026  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:57.040845  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:57.043079  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:57.329593  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:57.447143  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:57.542303  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:57.543506  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:57.553805  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:57.827884  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:57.946797  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:58.042695  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:58.043278  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:58.328009  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:58.447726  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:58.545605  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:58.548895  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:58.828520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:58.947520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:59.041341  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:59.042204  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:59.327308  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:59.447465  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:59.542482  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:59.542723  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:59.827941  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:59.946996  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:00.043176  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:00.104534  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:00.105059  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:00.340965  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:00.449041  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:00.551366  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:00.579002  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:00.828953  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:00.947554  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:01.040695  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:01.042741  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:01.328717  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:01.447395  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:01.540951  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:01.543418  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:01.828763  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:01.947431  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:02.046929  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:02.047190  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:02.327715  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:02.447709  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:02.541086  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:02.542605  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:02.553881  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:02.828768  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:02.947541  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:03.042304  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:03.043125  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:03.328588  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:03.448032  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:03.541497  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:03.544175  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:03.828676  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:03.947371  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:04.044486  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:04.045078  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:04.327521  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:04.447671  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:04.541369  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:04.542203  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:04.828334  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:04.946895  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:05.045224  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:05.055073  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:05.062604  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:05.327935  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:05.447801  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:05.542368  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:05.542768  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:05.827526  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:05.947604  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:06.048078  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:06.049923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:06.330381  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:06.447521  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:06.542229  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:06.542596  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:06.828109  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:06.947118  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:07.040788  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:07.042886  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:07.328337  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:07.447866  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:07.541481  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:07.542481  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:07.553750  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:07.827705  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:07.948008  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:08.042417  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:08.042859  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:08.328090  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:08.447525  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:08.540604  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:08.543433  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:08.837623  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:09.030014  716742 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:36:09.030047  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:09.071445  716742 node_ready.go:49] node "addons-057989" has status "Ready":"True"
	I0904 20:36:09.071472  716742 node_ready.go:38] duration metric: took 43.021269395s for node "addons-057989" to be "Ready" ...
	I0904 20:36:09.071484  716742 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 20:36:09.089105  716742 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:36:09.089125  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:09.090847  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:09.095775  716742 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-k9k5f" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:09.350645  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:09.533278  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:09.574617  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:09.575871  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:09.828155  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:09.978224  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:10.129591  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:10.131040  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:10.143649  716742 pod_ready.go:93] pod "coredns-6f6b679f8f-k9k5f" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.143680  716742 pod_ready.go:82] duration metric: took 1.047863266s for pod "coredns-6f6b679f8f-k9k5f" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.143707  716742 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.159088  716742 pod_ready.go:93] pod "etcd-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.159117  716742 pod_ready.go:82] duration metric: took 15.402507ms for pod "etcd-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.159133  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.171488  716742 pod_ready.go:93] pod "kube-apiserver-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.171518  716742 pod_ready.go:82] duration metric: took 12.375537ms for pod "kube-apiserver-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.171532  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.178648  716742 pod_ready.go:93] pod "kube-controller-manager-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.178676  716742 pod_ready.go:82] duration metric: took 7.13601ms for pod "kube-controller-manager-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.178691  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nc7jl" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.254762  716742 pod_ready.go:93] pod "kube-proxy-nc7jl" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.254796  716742 pod_ready.go:82] duration metric: took 76.096913ms for pod "kube-proxy-nc7jl" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.254811  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.328843  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:10.449765  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:10.544678  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:10.545672  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:10.654072  716742 pod_ready.go:93] pod "kube-scheduler-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.654096  716742 pod_ready.go:82] duration metric: took 399.277222ms for pod "kube-scheduler-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.654108  716742 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.829645  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:10.950366  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:11.050101  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:11.050780  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:11.328571  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:11.449457  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:11.542566  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:11.546248  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:11.830499  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:11.950704  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:12.054205  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:12.055128  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:12.327897  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:12.449726  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:12.543917  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:12.544796  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:12.661369  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:12.827882  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:12.949012  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:13.043831  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:13.046158  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:13.329536  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:13.450676  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:13.545035  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:13.545415  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:13.830985  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:13.948606  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:14.042504  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:14.048344  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:14.328120  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:14.450281  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:14.542904  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:14.544350  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:14.829872  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:14.951552  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:15.047571  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:15.048435  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:15.169903  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:15.329914  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:15.449286  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:15.548925  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:15.549916  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:15.828156  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:15.948710  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:16.055199  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:16.057193  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:16.328604  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:16.448365  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:16.542507  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:16.543480  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:16.828454  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:16.949666  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:17.043797  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:17.044826  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:17.329042  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:17.448956  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:17.550653  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:17.552483  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:17.662444  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:17.828887  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:17.949344  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:18.076252  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:18.077448  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:18.329262  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:18.450325  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:18.542144  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:18.544796  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:18.829083  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:18.949802  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:19.044416  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:19.045190  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:19.328890  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:19.449574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:19.544186  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:19.544394  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:19.835752  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:19.949187  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:20.048527  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:20.049968  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:20.178776  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:20.328791  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:20.449953  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:20.555512  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:20.556916  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:20.831683  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:20.948574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:21.044140  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:21.048581  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:21.329130  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:21.450562  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:21.549963  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:21.550903  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:21.829631  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:21.949015  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:22.046374  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:22.047617  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:22.328800  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:22.449023  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:22.542511  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:22.544223  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:22.660408  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:22.828655  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:22.949624  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:23.044461  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:23.046219  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:23.328978  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:23.448751  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:23.545036  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:23.546547  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:23.828770  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:23.949131  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:24.044659  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:24.044828  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:24.328713  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:24.448992  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:24.543975  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:24.544525  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:24.828665  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:24.948789  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:25.044058  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:25.045416  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:25.177094  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:25.329520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:25.448758  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:25.544315  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:25.546862  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:25.829047  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:25.949309  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:26.042483  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:26.042805  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:26.327729  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:26.448903  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:26.546711  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:26.551153  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:26.829442  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:26.949733  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:27.046187  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:27.046298  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:27.328645  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:27.450636  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:27.546923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:27.548930  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:27.661472  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:27.831563  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:27.951278  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:28.048305  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:28.050473  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:28.327740  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:28.448212  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:28.541352  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:28.544411  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:28.829661  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:28.949150  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:29.044402  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:29.045775  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:29.329086  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:29.460355  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:29.544885  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:29.547005  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:29.661692  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:29.829876  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:29.949946  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:30.054365  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:30.068184  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:30.329170  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:30.450074  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:30.544607  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:30.545699  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:30.828795  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:30.951635  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:31.043816  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:31.045286  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:31.329490  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:31.449348  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:31.543204  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:31.544300  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:31.661927  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:31.828620  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:31.950251  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:32.046442  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:32.048445  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:32.330831  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:32.449125  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:32.542854  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:32.543884  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:32.828558  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:32.948725  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:33.042434  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:33.043885  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:33.330491  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:33.448583  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:33.542414  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:33.542780  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:33.828458  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:33.949029  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:34.055376  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:34.056938  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:34.167167  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:34.329045  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:34.453047  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:34.543030  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:34.546398  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:34.829227  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:34.949551  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:35.050127  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:35.053256  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:35.329695  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:35.448676  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:35.542169  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:35.544984  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:35.828200  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:35.948869  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:36.044814  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:36.057927  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:36.335920  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:36.450101  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:36.541680  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:36.543120  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:36.662952  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:36.828448  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:36.948348  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:37.066077  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:37.066409  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:37.328407  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:37.448523  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:37.542789  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:37.543300  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:37.828403  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:37.949498  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:38.043389  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:38.046624  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:38.328771  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:38.448599  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:38.542461  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:38.544745  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:38.828115  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:38.952286  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:39.042378  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:39.044673  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:39.174566  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:39.334517  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:39.453611  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:39.543024  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:39.544086  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:39.828352  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:39.950126  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:40.056483  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:40.061766  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:40.328992  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:40.461098  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:40.544321  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:40.546350  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:40.828978  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:40.948665  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:41.051314  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:41.058138  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:41.328917  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:41.449551  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:41.550313  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:41.551220  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:41.664986  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:41.828514  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:41.952465  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:42.058001  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:42.059479  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:42.331960  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:42.454692  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:42.546320  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:42.547473  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:42.829791  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:42.950245  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:43.044498  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:43.046219  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:43.328405  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:43.454381  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:43.543453  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:43.544110  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:43.828245  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:43.948674  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:44.042409  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:44.043849  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:44.162551  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:44.328009  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:44.448599  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:44.543132  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:44.545358  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:44.828792  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:44.948747  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:45.087640  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:45.088284  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:45.344816  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:45.449644  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:45.544359  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:45.545990  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:45.829341  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:45.949813  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:46.058111  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:46.058856  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:46.170340  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:46.328474  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:46.449570  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:46.544097  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:46.545673  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:46.829642  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:46.949053  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:47.044822  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:47.046478  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:47.328302  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:47.449782  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:47.546336  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:47.551098  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:47.831960  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:47.949636  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:48.060368  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:48.060851  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:48.329136  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:48.449568  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:48.544057  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:48.544859  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:48.663951  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:48.829109  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:48.949424  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:49.044052  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:49.045593  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:49.333734  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:49.450504  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:49.545632  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:49.547608  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:49.828093  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:49.950097  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:50.069427  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:50.086326  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:50.331658  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:50.449560  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:50.542436  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:50.547014  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:50.832752  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:50.952128  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:51.047186  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:51.050572  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:51.163570  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:51.328417  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:51.449441  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:51.544087  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:51.544380  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:51.829033  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:51.949061  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:52.045024  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:52.045918  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:52.328926  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:52.448578  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:52.541879  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:52.542126  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:52.830078  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:52.963921  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:53.041818  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:53.044116  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:53.170106  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:53.328609  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:53.448820  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:53.544307  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:53.545486  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:53.829016  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:53.948618  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:54.041664  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:54.044249  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:54.328767  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:54.449124  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:54.541717  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:54.543353  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:54.828607  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:54.948470  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:55.051321  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:55.054553  716742 kapi.go:107] duration metric: took 1m26.017250397s to wait for kubernetes.io/minikube-addons=registry ...
	I0904 20:36:55.328521  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:55.448501  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:55.543065  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:55.660302  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:55.827945  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:55.951015  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:56.045384  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:56.332183  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:56.449054  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:56.546373  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:56.830492  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:56.949915  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:57.044356  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:57.328294  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:57.449445  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:57.544569  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:57.674148  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:57.830780  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:57.950151  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:58.046701  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:58.331040  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:58.448712  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:58.544968  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:58.828987  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:58.948346  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:59.042906  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:59.332382  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:59.449048  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:59.543704  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:59.832339  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:59.948917  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:00.044377  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:00.222015  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:00.329414  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:00.487456  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:00.546702  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:00.828580  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:00.950943  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:01.043973  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:01.330576  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:01.448174  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:01.542648  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:01.860366  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:01.965465  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:02.052101  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:02.333323  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:02.449358  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:02.543801  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:02.669696  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:02.838776  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:02.951038  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:03.049985  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:03.327773  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:03.448941  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:03.542817  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:03.828613  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:03.948689  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:04.046396  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:04.328336  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:04.449088  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:04.543775  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:04.828884  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:04.952546  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:05.068506  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:05.207353  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:05.328427  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:05.448931  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:05.543510  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:05.828526  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:05.951448  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:06.047048  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:06.328134  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:06.449585  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:06.543072  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:06.829018  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:06.950732  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:07.044010  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:07.328664  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:07.448981  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:07.542915  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:07.661203  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:07.828185  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:07.953699  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:08.043242  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:08.328648  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:08.448488  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:08.543420  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:08.827853  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:08.956859  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:09.045880  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:09.335624  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:09.450095  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:09.543668  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:09.835572  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:09.950374  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:10.053252  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:10.166375  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:10.328364  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:10.449574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:10.543407  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:10.828502  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:10.949028  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:11.042952  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:11.329417  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:11.453866  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:11.544063  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:11.829066  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:11.950959  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:12.047688  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:12.178429  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:12.337289  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:12.449041  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:12.543497  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:12.829095  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:12.951547  716742 kapi.go:107] duration metric: took 1m43.507997526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0904 20:37:13.048935  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:13.335226  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:13.543922  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:13.828005  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:14.043281  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:14.328415  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:14.543294  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:14.660776  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:14.828685  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:15.072923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:15.330103  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:15.542664  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:15.829204  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:16.058275  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:16.328850  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:16.542954  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:16.828673  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:17.042581  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:17.163249  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:17.327534  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:17.543624  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:17.827957  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:18.045331  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:18.329952  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:18.543428  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:18.829780  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:19.043830  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:19.168137  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:19.328673  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:19.544966  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:19.828409  716742 kapi.go:107] duration metric: took 1m46.003970775s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 20:37:19.830433  716742 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-057989 cluster.
	I0904 20:37:19.832052  716742 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 20:37:19.833771  716742 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
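	(Editor's note: a minimal sketch of what the gcp-auth-skip-secret hint above means in practice. The pod name, image, and command below are placeholders and are not part of this test run; only the label key comes from the minikube output. Adding the label to a pod's configuration before the pod is created should keep the gcp-auth addon from mounting the GCP credentials into it; existing pods would still need to be recreated, as the log notes.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds        # hypothetical pod, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"    # label key mentioned in the minikube output above
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox   # any image; busybox shown only as an example
	    command: ["sleep", "3600"]
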
	I0904 20:37:20.045906  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:20.556992  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:21.043726  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:21.173518  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:21.543224  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:22.045682  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:22.543970  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:23.045347  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:23.174483  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:23.543944  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:24.045448  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:24.544024  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:25.074393  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:25.543997  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:25.661982  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:26.066374  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:26.543191  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:27.043090  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:27.542940  716742 kapi.go:107] duration metric: took 1m58.504613312s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 20:37:27.545069  716742 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0904 20:37:27.546958  716742 addons.go:510] duration metric: took 2m5.639519055s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0904 20:37:28.163426  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:30.166696  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:32.661223  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:35.164273  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:37.660899  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:40.164467  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:42.166438  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:44.661375  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:46.662396  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:48.662560  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:49.660114  716742 pod_ready.go:93] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"True"
	I0904 20:37:49.660145  716742 pod_ready.go:82] duration metric: took 1m39.006028182s for pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace to be "Ready" ...
	I0904 20:37:49.660158  716742 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hxn5k" in "kube-system" namespace to be "Ready" ...
	I0904 20:37:49.666071  716742 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-hxn5k" in "kube-system" namespace has status "Ready":"True"
	I0904 20:37:49.666099  716742 pod_ready.go:82] duration metric: took 5.93149ms for pod "nvidia-device-plugin-daemonset-hxn5k" in "kube-system" namespace to be "Ready" ...
	I0904 20:37:49.666121  716742 pod_ready.go:39] duration metric: took 1m40.594604615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 20:37:49.666138  716742 api_server.go:52] waiting for apiserver process to appear ...
	I0904 20:37:49.666166  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 20:37:49.666227  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 20:37:49.723728  716742 cri.go:89] found id: "8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:37:49.723759  716742 cri.go:89] found id: ""
	I0904 20:37:49.723767  716742 logs.go:276] 1 containers: [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59]
	I0904 20:37:49.723827  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.727548  716742 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 20:37:49.727628  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 20:37:49.775692  716742 cri.go:89] found id: "4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:37:49.775716  716742 cri.go:89] found id: ""
	I0904 20:37:49.775725  716742 logs.go:276] 1 containers: [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971]
	I0904 20:37:49.775781  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.779581  716742 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 20:37:49.779678  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 20:37:49.819669  716742 cri.go:89] found id: "2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:37:49.819693  716742 cri.go:89] found id: ""
	I0904 20:37:49.819702  716742 logs.go:276] 1 containers: [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b]
	I0904 20:37:49.819758  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.823267  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 20:37:49.823362  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 20:37:49.862094  716742 cri.go:89] found id: "d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:37:49.862116  716742 cri.go:89] found id: ""
	I0904 20:37:49.862124  716742 logs.go:276] 1 containers: [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608]
	I0904 20:37:49.862225  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.865865  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 20:37:49.865988  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 20:37:49.907687  716742 cri.go:89] found id: "13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:37:49.907711  716742 cri.go:89] found id: ""
	I0904 20:37:49.907720  716742 logs.go:276] 1 containers: [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5]
	I0904 20:37:49.907804  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.911524  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 20:37:49.911619  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 20:37:49.963560  716742 cri.go:89] found id: "7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:37:49.963588  716742 cri.go:89] found id: ""
	I0904 20:37:49.963595  716742 logs.go:276] 1 containers: [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743]
	I0904 20:37:49.963722  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.967436  716742 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 20:37:49.967512  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 20:37:50.027766  716742 cri.go:89] found id: "508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:37:50.027790  716742 cri.go:89] found id: ""
	I0904 20:37:50.027799  716742 logs.go:276] 1 containers: [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706]
	I0904 20:37:50.027863  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:50.049546  716742 logs.go:123] Gathering logs for dmesg ...
	I0904 20:37:50.049571  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 20:37:50.105332  716742 logs.go:123] Gathering logs for coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] ...
	I0904 20:37:50.105413  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:37:50.221432  716742 logs.go:123] Gathering logs for kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] ...
	I0904 20:37:50.221473  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:37:50.276905  716742 logs.go:123] Gathering logs for CRI-O ...
	I0904 20:37:50.276941  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 20:37:50.382777  716742 logs.go:123] Gathering logs for kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] ...
	I0904 20:37:50.382817  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:37:50.476194  716742 logs.go:123] Gathering logs for kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] ...
	I0904 20:37:50.476232  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:37:50.522576  716742 logs.go:123] Gathering logs for container status ...
	I0904 20:37:50.522612  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 20:37:50.578692  716742 logs.go:123] Gathering logs for kubelet ...
	I0904 20:37:50.578725  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0904 20:37:50.609335  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:37:50.609580  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:37:50.671992  716742 logs.go:123] Gathering logs for describe nodes ...
	I0904 20:37:50.672029  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 20:37:50.867431  716742 logs.go:123] Gathering logs for kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] ...
	I0904 20:37:50.867459  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:37:50.932411  716742 logs.go:123] Gathering logs for etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] ...
	I0904 20:37:50.932448  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:37:51.003529  716742 logs.go:123] Gathering logs for kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] ...
	I0904 20:37:51.003585  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:37:51.053994  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:37:51.054031  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0904 20:37:51.054160  716742 out.go:270] X Problems detected in kubelet:
	W0904 20:37:51.054202  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:37:51.054228  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:37:51.054237  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:37:51.054276  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:38:01.054577  716742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 20:38:01.068412  716742 api_server.go:72] duration metric: took 2m39.161430852s to wait for apiserver process to appear ...
	I0904 20:38:01.068486  716742 api_server.go:88] waiting for apiserver healthz status ...
	I0904 20:38:01.068530  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 20:38:01.068610  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 20:38:01.106967  716742 cri.go:89] found id: "8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:01.106991  716742 cri.go:89] found id: ""
	I0904 20:38:01.106998  716742 logs.go:276] 1 containers: [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59]
	I0904 20:38:01.107057  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.110991  716742 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 20:38:01.111071  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 20:38:01.160285  716742 cri.go:89] found id: "4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:01.160309  716742 cri.go:89] found id: ""
	I0904 20:38:01.160316  716742 logs.go:276] 1 containers: [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971]
	I0904 20:38:01.160377  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.164548  716742 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 20:38:01.164621  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 20:38:01.214500  716742 cri.go:89] found id: "2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:01.214527  716742 cri.go:89] found id: ""
	I0904 20:38:01.214536  716742 logs.go:276] 1 containers: [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b]
	I0904 20:38:01.214599  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.218732  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 20:38:01.218808  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 20:38:01.261426  716742 cri.go:89] found id: "d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:01.261458  716742 cri.go:89] found id: ""
	I0904 20:38:01.261468  716742 logs.go:276] 1 containers: [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608]
	I0904 20:38:01.261535  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.265381  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 20:38:01.265456  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 20:38:01.304546  716742 cri.go:89] found id: "13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:01.304570  716742 cri.go:89] found id: ""
	I0904 20:38:01.304578  716742 logs.go:276] 1 containers: [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5]
	I0904 20:38:01.304635  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.308267  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 20:38:01.308344  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 20:38:01.348771  716742 cri.go:89] found id: "7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:01.348801  716742 cri.go:89] found id: ""
	I0904 20:38:01.348811  716742 logs.go:276] 1 containers: [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743]
	I0904 20:38:01.348873  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.353679  716742 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 20:38:01.353756  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 20:38:01.395097  716742 cri.go:89] found id: "508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:01.395119  716742 cri.go:89] found id: ""
	I0904 20:38:01.395127  716742 logs.go:276] 1 containers: [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706]
	I0904 20:38:01.395200  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.399164  716742 logs.go:123] Gathering logs for CRI-O ...
	I0904 20:38:01.399196  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 20:38:01.498148  716742 logs.go:123] Gathering logs for container status ...
	I0904 20:38:01.498186  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 20:38:01.560705  716742 logs.go:123] Gathering logs for kubelet ...
	I0904 20:38:01.560738  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0904 20:38:01.587961  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:01.588204  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:01.651595  716742 logs.go:123] Gathering logs for kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] ...
	I0904 20:38:01.651629  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:01.724603  716742 logs.go:123] Gathering logs for kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] ...
	I0904 20:38:01.724637  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:01.785389  716742 logs.go:123] Gathering logs for kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] ...
	I0904 20:38:01.785426  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:01.826786  716742 logs.go:123] Gathering logs for kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] ...
	I0904 20:38:01.826821  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:01.866479  716742 logs.go:123] Gathering logs for kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] ...
	I0904 20:38:01.866509  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:01.938042  716742 logs.go:123] Gathering logs for dmesg ...
	I0904 20:38:01.938147  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 20:38:01.964182  716742 logs.go:123] Gathering logs for describe nodes ...
	I0904 20:38:01.964208  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 20:38:02.154346  716742 logs.go:123] Gathering logs for etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] ...
	I0904 20:38:02.154476  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:02.217734  716742 logs.go:123] Gathering logs for coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] ...
	I0904 20:38:02.217780  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:02.287714  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:02.287744  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0904 20:38:02.287830  716742 out.go:270] X Problems detected in kubelet:
	W0904 20:38:02.287844  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:02.287878  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:02.287888  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:02.287901  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:38:12.289049  716742 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 20:38:12.297611  716742 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0904 20:38:12.298713  716742 api_server.go:141] control plane version: v1.31.0
	I0904 20:38:12.298744  716742 api_server.go:131] duration metric: took 11.230244619s to wait for apiserver health ...
	I0904 20:38:12.298754  716742 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 20:38:12.298777  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 20:38:12.298845  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 20:38:12.341281  716742 cri.go:89] found id: "8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:12.341303  716742 cri.go:89] found id: ""
	I0904 20:38:12.341311  716742 logs.go:276] 1 containers: [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59]
	I0904 20:38:12.341369  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.345210  716742 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 20:38:12.345295  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 20:38:12.384841  716742 cri.go:89] found id: "4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:12.384863  716742 cri.go:89] found id: ""
	I0904 20:38:12.384871  716742 logs.go:276] 1 containers: [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971]
	I0904 20:38:12.384934  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.388824  716742 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 20:38:12.388897  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 20:38:12.432322  716742 cri.go:89] found id: "2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:12.432344  716742 cri.go:89] found id: ""
	I0904 20:38:12.432352  716742 logs.go:276] 1 containers: [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b]
	I0904 20:38:12.432410  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.436102  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 20:38:12.436180  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 20:38:12.474996  716742 cri.go:89] found id: "d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:12.475018  716742 cri.go:89] found id: ""
	I0904 20:38:12.475025  716742 logs.go:276] 1 containers: [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608]
	I0904 20:38:12.475087  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.478648  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 20:38:12.478726  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 20:38:12.522943  716742 cri.go:89] found id: "13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:12.523008  716742 cri.go:89] found id: ""
	I0904 20:38:12.523022  716742 logs.go:276] 1 containers: [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5]
	I0904 20:38:12.523085  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.526855  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 20:38:12.526930  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 20:38:12.577119  716742 cri.go:89] found id: "7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:12.577153  716742 cri.go:89] found id: ""
	I0904 20:38:12.577190  716742 logs.go:276] 1 containers: [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743]
	I0904 20:38:12.577249  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.580701  716742 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 20:38:12.580774  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 20:38:12.624944  716742 cri.go:89] found id: "508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:12.624967  716742 cri.go:89] found id: ""
	I0904 20:38:12.624975  716742 logs.go:276] 1 containers: [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706]
	I0904 20:38:12.625035  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.628574  716742 logs.go:123] Gathering logs for kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] ...
	I0904 20:38:12.628599  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:12.672932  716742 logs.go:123] Gathering logs for dmesg ...
	I0904 20:38:12.672968  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 20:38:12.691130  716742 logs.go:123] Gathering logs for etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] ...
	I0904 20:38:12.691159  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:12.746973  716742 logs.go:123] Gathering logs for kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] ...
	I0904 20:38:12.747054  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:12.807676  716742 logs.go:123] Gathering logs for kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] ...
	I0904 20:38:12.807724  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:12.855232  716742 logs.go:123] Gathering logs for kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] ...
	I0904 20:38:12.855264  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:12.929481  716742 logs.go:123] Gathering logs for CRI-O ...
	I0904 20:38:12.929521  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 20:38:13.034293  716742 logs.go:123] Gathering logs for container status ...
	I0904 20:38:13.034341  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 20:38:13.089651  716742 logs.go:123] Gathering logs for kubelet ...
	I0904 20:38:13.089682  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0904 20:38:13.118436  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:13.118683  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:13.184420  716742 logs.go:123] Gathering logs for describe nodes ...
	I0904 20:38:13.184459  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 20:38:13.328501  716742 logs.go:123] Gathering logs for kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] ...
	I0904 20:38:13.328532  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:13.382614  716742 logs.go:123] Gathering logs for coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] ...
	I0904 20:38:13.382650  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:13.450831  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:13.450864  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0904 20:38:13.450946  716742 out.go:270] X Problems detected in kubelet:
	W0904 20:38:13.450959  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:13.450989  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:13.450998  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:13.451010  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:38:23.464733  716742 system_pods.go:59] 18 kube-system pods found
	I0904 20:38:23.464785  716742 system_pods.go:61] "coredns-6f6b679f8f-k9k5f" [275ab65d-8cdd-4e33-9a30-8e2dea82c08e] Running
	I0904 20:38:23.464791  716742 system_pods.go:61] "csi-hostpath-attacher-0" [415f2771-f4e0-4711-90b4-bbb3cd155351] Running
	I0904 20:38:23.464798  716742 system_pods.go:61] "csi-hostpath-resizer-0" [fcf10418-cc7b-4979-851d-4f6623df5536] Running
	I0904 20:38:23.464803  716742 system_pods.go:61] "csi-hostpathplugin-mn9qp" [0f3278f5-fc14-4f5d-a426-c25a64816e1c] Running
	I0904 20:38:23.464835  716742 system_pods.go:61] "etcd-addons-057989" [e11680b5-b6b4-44d1-bd13-f62d154e2a01] Running
	I0904 20:38:23.464846  716742 system_pods.go:61] "kindnet-xh95z" [0ad1e90a-ac7c-4bde-a26d-ff3f11c0f743] Running
	I0904 20:38:23.464851  716742 system_pods.go:61] "kube-apiserver-addons-057989" [5aea7959-e9f7-4ddd-8bd2-bac55b04b0c8] Running
	I0904 20:38:23.464856  716742 system_pods.go:61] "kube-controller-manager-addons-057989" [f9bdc1cc-474c-40cf-b9a6-04857fd1dcaf] Running
	I0904 20:38:23.464861  716742 system_pods.go:61] "kube-ingress-dns-minikube" [66349fc9-7ad4-480d-b82b-7fb460b850a2] Running
	I0904 20:38:23.464875  716742 system_pods.go:61] "kube-proxy-nc7jl" [43662cab-76d9-4759-9d5b-6f8c245fa417] Running
	I0904 20:38:23.464880  716742 system_pods.go:61] "kube-scheduler-addons-057989" [66094b5b-4131-480b-aff3-4f9187b9afa4] Running
	I0904 20:38:23.464885  716742 system_pods.go:61] "metrics-server-84c5f94fbc-fq2ps" [42462678-f110-4415-b2f1-367217f8c8a2] Running
	I0904 20:38:23.464903  716742 system_pods.go:61] "nvidia-device-plugin-daemonset-hxn5k" [e2ce6825-b8bf-4d5a-a77f-337ca9cd2e60] Running
	I0904 20:38:23.464907  716742 system_pods.go:61] "registry-6fb4cdfc84-q2v5x" [08b3698e-ab89-4393-846c-c4d5984ebe9e] Running
	I0904 20:38:23.464911  716742 system_pods.go:61] "registry-proxy-xfn95" [19eda952-0370-4c89-ad9f-fa2fcf34e855] Running
	I0904 20:38:23.464915  716742 system_pods.go:61] "snapshot-controller-56fcc65765-2nr7v" [e1ed8e39-dd7b-4cfb-bf3e-3ba5331286b1] Running
	I0904 20:38:23.464922  716742 system_pods.go:61] "snapshot-controller-56fcc65765-tcz8s" [16aa5513-c8b9-4e3b-9c63-2b9d9c64ef30] Running
	I0904 20:38:23.464927  716742 system_pods.go:61] "storage-provisioner" [12d1bdba-0302-4966-8175-e7542a9ae817] Running
	I0904 20:38:23.464937  716742 system_pods.go:74] duration metric: took 11.166175842s to wait for pod list to return data ...
	I0904 20:38:23.464949  716742 default_sa.go:34] waiting for default service account to be created ...
	I0904 20:38:23.467768  716742 default_sa.go:45] found service account: "default"
	I0904 20:38:23.467802  716742 default_sa.go:55] duration metric: took 2.843632ms for default service account to be created ...
	I0904 20:38:23.467813  716742 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 20:38:23.479242  716742 system_pods.go:86] 18 kube-system pods found
	I0904 20:38:23.479286  716742 system_pods.go:89] "coredns-6f6b679f8f-k9k5f" [275ab65d-8cdd-4e33-9a30-8e2dea82c08e] Running
	I0904 20:38:23.479295  716742 system_pods.go:89] "csi-hostpath-attacher-0" [415f2771-f4e0-4711-90b4-bbb3cd155351] Running
	I0904 20:38:23.479301  716742 system_pods.go:89] "csi-hostpath-resizer-0" [fcf10418-cc7b-4979-851d-4f6623df5536] Running
	I0904 20:38:23.479306  716742 system_pods.go:89] "csi-hostpathplugin-mn9qp" [0f3278f5-fc14-4f5d-a426-c25a64816e1c] Running
	I0904 20:38:23.479311  716742 system_pods.go:89] "etcd-addons-057989" [e11680b5-b6b4-44d1-bd13-f62d154e2a01] Running
	I0904 20:38:23.479317  716742 system_pods.go:89] "kindnet-xh95z" [0ad1e90a-ac7c-4bde-a26d-ff3f11c0f743] Running
	I0904 20:38:23.479321  716742 system_pods.go:89] "kube-apiserver-addons-057989" [5aea7959-e9f7-4ddd-8bd2-bac55b04b0c8] Running
	I0904 20:38:23.479332  716742 system_pods.go:89] "kube-controller-manager-addons-057989" [f9bdc1cc-474c-40cf-b9a6-04857fd1dcaf] Running
	I0904 20:38:23.479337  716742 system_pods.go:89] "kube-ingress-dns-minikube" [66349fc9-7ad4-480d-b82b-7fb460b850a2] Running
	I0904 20:38:23.479348  716742 system_pods.go:89] "kube-proxy-nc7jl" [43662cab-76d9-4759-9d5b-6f8c245fa417] Running
	I0904 20:38:23.479353  716742 system_pods.go:89] "kube-scheduler-addons-057989" [66094b5b-4131-480b-aff3-4f9187b9afa4] Running
	I0904 20:38:23.479359  716742 system_pods.go:89] "metrics-server-84c5f94fbc-fq2ps" [42462678-f110-4415-b2f1-367217f8c8a2] Running
	I0904 20:38:23.479367  716742 system_pods.go:89] "nvidia-device-plugin-daemonset-hxn5k" [e2ce6825-b8bf-4d5a-a77f-337ca9cd2e60] Running
	I0904 20:38:23.479371  716742 system_pods.go:89] "registry-6fb4cdfc84-q2v5x" [08b3698e-ab89-4393-846c-c4d5984ebe9e] Running
	I0904 20:38:23.479375  716742 system_pods.go:89] "registry-proxy-xfn95" [19eda952-0370-4c89-ad9f-fa2fcf34e855] Running
	I0904 20:38:23.479384  716742 system_pods.go:89] "snapshot-controller-56fcc65765-2nr7v" [e1ed8e39-dd7b-4cfb-bf3e-3ba5331286b1] Running
	I0904 20:38:23.479388  716742 system_pods.go:89] "snapshot-controller-56fcc65765-tcz8s" [16aa5513-c8b9-4e3b-9c63-2b9d9c64ef30] Running
	I0904 20:38:23.479392  716742 system_pods.go:89] "storage-provisioner" [12d1bdba-0302-4966-8175-e7542a9ae817] Running
	I0904 20:38:23.479403  716742 system_pods.go:126] duration metric: took 11.582438ms to wait for k8s-apps to be running ...
	I0904 20:38:23.479411  716742 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 20:38:23.479471  716742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 20:38:23.492157  716742 system_svc.go:56] duration metric: took 12.73694ms WaitForService to wait for kubelet
	I0904 20:38:23.492198  716742 kubeadm.go:582] duration metric: took 3m1.585223376s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:38:23.492221  716742 node_conditions.go:102] verifying NodePressure condition ...
	I0904 20:38:23.495727  716742 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 20:38:23.495758  716742 node_conditions.go:123] node cpu capacity is 2
	I0904 20:38:23.495769  716742 node_conditions.go:105] duration metric: took 3.542898ms to run NodePressure ...
	I0904 20:38:23.495782  716742 start.go:241] waiting for startup goroutines ...
	I0904 20:38:23.495790  716742 start.go:246] waiting for cluster config update ...
	I0904 20:38:23.495806  716742 start.go:255] writing updated cluster config ...
	I0904 20:38:23.496108  716742 ssh_runner.go:195] Run: rm -f paused
	I0904 20:38:23.838873  716742 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0904 20:38:23.842549  716742 out.go:177] * Done! kubectl is now configured to use "addons-057989" cluster and "default" namespace by default
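	(Editor's note: the readiness wait captured above reduces to two checks that can be reproduced by hand. This is a minimal sketch, assuming shell access to the node, e.g. via `minikube ssh -p addons-057989`, and the apiserver endpoint 192.168.49.2:8443 shown in the log; the profile name is taken from this run.)
	
	  # locate the kube-apiserver container the same way the log gatherer does
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # probe the health endpoint that returned 200 at 20:38:12 above
	  curl -sk https://192.168.49.2:8443/healthz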
	
	
	==> CRI-O <==
	Sep 04 20:50:16 addons-057989 crio[964]: time="2024-09-04 20:50:16.051794551Z" level=info msg="Stopped pod sandbox: 3db751846b6144cb974070297151c2e167becee2cb48d9b274146cb23e1828ed" id=aed81f63-0929-437c-8009-c7e909bcc0b5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:50:16 addons-057989 crio[964]: time="2024-09-04 20:50:16.110800445Z" level=info msg="Removing container: 45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0" id=1b7df6a5-43fa-42d2-83aa-8ca0442f96f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 04 20:50:16 addons-057989 crio[964]: time="2024-09-04 20:50:16.125777091Z" level=info msg="Removed container 45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0: ingress-nginx/ingress-nginx-controller-bc57996ff-vqkrh/controller" id=1b7df6a5-43fa-42d2-83aa-8ca0442f96f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 04 20:50:16 addons-057989 crio[964]: time="2024-09-04 20:50:16.973642465Z" level=info msg="Removing container: e7757d7bb94ee94e4c4e1f7a44556747fad5b0512e8ae6cf8cac97c79506b43d" id=2c7adf8a-8ef3-43ca-9e0a-954e55056750 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 04 20:50:16 addons-057989 crio[964]: time="2024-09-04 20:50:16.990360096Z" level=info msg="Removed container e7757d7bb94ee94e4c4e1f7a44556747fad5b0512e8ae6cf8cac97c79506b43d: ingress-nginx/ingress-nginx-admission-create-6vqzd/create" id=2c7adf8a-8ef3-43ca-9e0a-954e55056750 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 04 20:50:16 addons-057989 crio[964]: time="2024-09-04 20:50:16.991684399Z" level=info msg="Removing container: bf713f4f5efd92c3e7805608d47d18d0af69dd4b3be7f450d252d5ea4c03f1a7" id=1499adaf-e9df-4eda-a4b7-6c52c6f06405 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.034618746Z" level=info msg="Removed container bf713f4f5efd92c3e7805608d47d18d0af69dd4b3be7f450d252d5ea4c03f1a7: ingress-nginx/ingress-nginx-admission-patch-kgzlw/patch" id=1499adaf-e9df-4eda-a4b7-6c52c6f06405 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.036465557Z" level=info msg="Stopping pod sandbox: 3db751846b6144cb974070297151c2e167becee2cb48d9b274146cb23e1828ed" id=4164048d-d855-418e-9455-4448b556319b name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.036508674Z" level=info msg="Stopped pod sandbox (already stopped): 3db751846b6144cb974070297151c2e167becee2cb48d9b274146cb23e1828ed" id=4164048d-d855-418e-9455-4448b556319b name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.037159827Z" level=info msg="Removing pod sandbox: 3db751846b6144cb974070297151c2e167becee2cb48d9b274146cb23e1828ed" id=01a3326d-bae0-4684-ad50-5105b16d6de1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.048166220Z" level=info msg="Removed pod sandbox: 3db751846b6144cb974070297151c2e167becee2cb48d9b274146cb23e1828ed" id=01a3326d-bae0-4684-ad50-5105b16d6de1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.048718028Z" level=info msg="Stopping pod sandbox: 335cdd6d925a53339ca710a327f1ce785bd156029f27c0f28060d4b7a29eba69" id=c82ff5a7-6211-45be-99f3-d1afbbbecfe1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.048759283Z" level=info msg="Stopped pod sandbox (already stopped): 335cdd6d925a53339ca710a327f1ce785bd156029f27c0f28060d4b7a29eba69" id=c82ff5a7-6211-45be-99f3-d1afbbbecfe1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.049129236Z" level=info msg="Removing pod sandbox: 335cdd6d925a53339ca710a327f1ce785bd156029f27c0f28060d4b7a29eba69" id=75b7ec87-fa41-43db-ab8a-0205b8d12904 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.059597277Z" level=info msg="Removed pod sandbox: 335cdd6d925a53339ca710a327f1ce785bd156029f27c0f28060d4b7a29eba69" id=75b7ec87-fa41-43db-ab8a-0205b8d12904 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.060218622Z" level=info msg="Stopping pod sandbox: f3644b533c86c494b93e6ad19fa380890790cbcf17d484a40168707400daa58c" id=38cb7480-7880-4938-91e1-74ed3d7464bf name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.060257029Z" level=info msg="Stopped pod sandbox (already stopped): f3644b533c86c494b93e6ad19fa380890790cbcf17d484a40168707400daa58c" id=38cb7480-7880-4938-91e1-74ed3d7464bf name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.060579549Z" level=info msg="Removing pod sandbox: f3644b533c86c494b93e6ad19fa380890790cbcf17d484a40168707400daa58c" id=33cefc36-01c8-49e5-994a-edebcc7f2bd2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.073077291Z" level=info msg="Removed pod sandbox: f3644b533c86c494b93e6ad19fa380890790cbcf17d484a40168707400daa58c" id=33cefc36-01c8-49e5-994a-edebcc7f2bd2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.073605796Z" level=info msg="Stopping pod sandbox: 57688671382bb1a03aab31fe1b4b1727f07dcdf569d4136ae155f20f538a1779" id=810fb55a-23b7-44fb-a921-bc2bac9bac01 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.073645623Z" level=info msg="Stopped pod sandbox (already stopped): 57688671382bb1a03aab31fe1b4b1727f07dcdf569d4136ae155f20f538a1779" id=810fb55a-23b7-44fb-a921-bc2bac9bac01 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.074125194Z" level=info msg="Removing pod sandbox: 57688671382bb1a03aab31fe1b4b1727f07dcdf569d4136ae155f20f538a1779" id=efc4cdf2-53ea-44dc-8439-a658c7c2de66 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:50:17 addons-057989 crio[964]: time="2024-09-04 20:50:17.090139316Z" level=info msg="Removed pod sandbox: 57688671382bb1a03aab31fe1b4b1727f07dcdf569d4136ae155f20f538a1779" id=efc4cdf2-53ea-44dc-8439-a658c7c2de66 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:50:19 addons-057989 crio[964]: time="2024-09-04 20:50:19.677087122Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=26454b9f-056d-4b27-87db-c9af3a8ab7ad name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:50:19 addons-057989 crio[964]: time="2024-09-04 20:50:19.677320635Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=26454b9f-056d-4b27-87db-c9af3a8ab7ad name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	f7279f29d273f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   8 seconds ago       Running             hello-world-app            0                   ba969b211e5a6       hello-world-app-55bf9c44b4-pdmkb
	6082ed4240ccb       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                         2 minutes ago       Running             nginx                      0                   00d8643d081a3       nginx
	17ccab4a15b48       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            13 minutes ago      Running             gcp-auth                   0                   c1f5e9bf12177       gcp-auth-89d5ffd79-cxk4z
	f2a85a34e5358       gcr.io/cloud-spanner-emulator/emulator@sha256:41ec188288c7943f488600462b2b74002814e52439be82d15de33c3ee4898a58          13 minutes ago      Running             cloud-spanner-emulator     0                   fbaf7f46fd8ef       cloud-spanner-emulator-769b77f747-l4dt7
	461dc54fabae7       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        13 minutes ago      Running             local-path-provisioner     0                   ea483f0b84bff       local-path-provisioner-86d989889c-kw8pv
	b6829e8e31f07       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   13 minutes ago      Running             metrics-server             0                   0eacdbbb6c587       metrics-server-84c5f94fbc-fq2ps
	31f827593e3fc       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                13 minutes ago      Running             nvidia-device-plugin-ctr   0                   159cecd95b4b7       nvidia-device-plugin-daemonset-hxn5k
	73fc2fc333315       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                         14 minutes ago      Running             yakd                       0                   0dd5920b12b1a       yakd-dashboard-67d98fc6b-7j8ss
	1020fa8b2d129       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        14 minutes ago      Running             storage-provisioner        0                   13409a452e461       storage-provisioner
	2da0c2547a33e       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        14 minutes ago      Running             coredns                    0                   bb705bdc5c322       coredns-6f6b679f8f-k9k5f
	508bb2db26ab2       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                      14 minutes ago      Running             kindnet-cni                0                   7d8b3855e8eb9       kindnet-xh95z
	13931a0aa1133       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                        14 minutes ago      Running             kube-proxy                 0                   f81a1c946ebc8       kube-proxy-nc7jl
	8926a3a460f5f       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                        15 minutes ago      Running             kube-apiserver             0                   092c186577491       kube-apiserver-addons-057989
	4b86be5e13ac3       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        15 minutes ago      Running             etcd                       0                   ee74187eabd31       etcd-addons-057989
	d659a50021dfa       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                        15 minutes ago      Running             kube-scheduler             0                   5bafee131ac20       kube-scheduler-addons-057989
	7276ded69a4bd       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                        15 minutes ago      Running             kube-controller-manager    0                   fa8a52afc7812       kube-controller-manager-addons-057989
	
	
	==> coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] <==
	[INFO] 10.244.0.4:43230 - 36266 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080696s
	[INFO] 10.244.0.4:54625 - 20507 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002721352s
	[INFO] 10.244.0.4:54625 - 17636 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002899991s
	[INFO] 10.244.0.4:47752 - 7177 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069036s
	[INFO] 10.244.0.4:47752 - 24629 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101462s
	[INFO] 10.244.0.4:45701 - 37302 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101938s
	[INFO] 10.244.0.4:45701 - 44725 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000049492s
	[INFO] 10.244.0.4:41451 - 17255 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054743s
	[INFO] 10.244.0.4:41451 - 52577 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004708s
	[INFO] 10.244.0.4:58781 - 44362 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045061s
	[INFO] 10.244.0.4:58781 - 5196 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000714s
	[INFO] 10.244.0.4:33457 - 46149 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001594296s
	[INFO] 10.244.0.4:33457 - 859 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001866158s
	[INFO] 10.244.0.4:53802 - 33736 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000056253s
	[INFO] 10.244.0.4:53802 - 30774 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077364s
	[INFO] 10.244.0.20:37769 - 193 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000226473s
	[INFO] 10.244.0.20:34803 - 4301 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000135128s
	[INFO] 10.244.0.20:58659 - 7520 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152554s
	[INFO] 10.244.0.20:36650 - 49243 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000080572s
	[INFO] 10.244.0.20:38727 - 6956 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000194687s
	[INFO] 10.244.0.20:48234 - 885 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104121s
	[INFO] 10.244.0.20:43383 - 57780 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002078142s
	[INFO] 10.244.0.20:59508 - 59382 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002994471s
	[INFO] 10.244.0.20:37033 - 24816 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002191148s
	[INFO] 10.244.0.20:33558 - 37651 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0021591s
	
	
	==> describe nodes <==
	Name:               addons-057989
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-057989
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=addons-057989
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_04T20_35_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-057989
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 20:35:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-057989
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 20:50:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Sep 2024 20:48:22 +0000   Wed, 04 Sep 2024 20:35:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Sep 2024 20:48:22 +0000   Wed, 04 Sep 2024 20:35:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Sep 2024 20:48:22 +0000   Wed, 04 Sep 2024 20:35:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Sep 2024 20:48:22 +0000   Wed, 04 Sep 2024 20:36:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-057989
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 21d608e1e5814ff9b34c3cb1cfdf5bda
	  System UUID:                19e6588e-4dc5-4438-9acf-c7fa25e5848f
	  Boot ID:                    02fc5889-82d8-42f6-b649-9c13bcf74bdb
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     cloud-spanner-emulator-769b77f747-l4dt7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-pdmkb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-89d5ffd79-cxk4z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-k9k5f                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-057989                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-xh95z                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-057989               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-057989      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-nc7jl                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-057989               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-fq2ps            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 nvidia-device-plugin-daemonset-hxn5k       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-kw8pv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-7j8ss             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-057989 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-057989 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-057989 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node addons-057989 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node addons-057989 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m                kubelet          Node addons-057989 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node addons-057989 event: Registered Node addons-057989 in Controller
	  Normal   NodeReady                14m                kubelet          Node addons-057989 status is now: NodeReady
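	(Editor's note: the node view above can be regenerated on the node with the same kubectl invocation the log gatherer runs; the binary and kubeconfig paths below are copied verbatim from the "describe nodes" Run lines earlier in this log and are specific to this minikube run.)
	
	  sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig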
	
	
	==> dmesg <==
	[Sep 4 20:07] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Sep 4 20:31] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] <==
	{"level":"warn","ts":"2024-09-04T20:35:25.581489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.959743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.687455Z","caller":"traceutil/trace.go:171","msg":"trace[1976798533] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:388; }","duration":"253.908912ms","start":"2024-09-04T20:35:25.433511Z","end":"2024-09-04T20:35:25.687420Z","steps":["trace[1976798533] 'range keys from in-memory index tree'  (duration: 147.878228ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T20:35:25.691719Z","caller":"traceutil/trace.go:171","msg":"trace[369956328] linearizableReadLoop","detail":"{readStateIndex:398; appliedIndex:398; }","duration":"110.347393ms","start":"2024-09-04T20:35:25.581351Z","end":"2024-09-04T20:35:25.691698Z","steps":["trace[369956328] 'read index received'  (duration: 110.342412ms)","trace[369956328] 'applied index is now lower than readState.Index'  (duration: 3.881µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.694290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.921597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.707065Z","caller":"traceutil/trace.go:171","msg":"trace[1332958654] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:388; }","duration":"125.695984ms","start":"2024-09-04T20:35:25.581345Z","end":"2024-09-04T20:35:25.707041Z","steps":["trace[1332958654] 'agreement among raft nodes before linearized reading'  (duration: 112.43803ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T20:35:25.884853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.044659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.885401Z","caller":"traceutil/trace.go:171","msg":"trace[1259821602] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:397; }","duration":"130.606ms","start":"2024-09-04T20:35:25.754776Z","end":"2024-09-04T20:35:25.885382Z","steps":["trace[1259821602] 'agreement among raft nodes before linearized reading'  (duration: 53.794706ms)","trace[1259821602] 'range keys from in-memory index tree'  (duration: 76.238145ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.885753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.982311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.888311Z","caller":"traceutil/trace.go:171","msg":"trace[778996107] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:0; response_revision:397; }","duration":"133.536021ms","start":"2024-09-04T20:35:25.754756Z","end":"2024-09-04T20:35:25.888292Z","steps":["trace[778996107] 'agreement among raft nodes before linearized reading'  (duration: 53.824646ms)","trace[778996107] 'range keys from in-memory index tree'  (duration: 77.148805ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.885790Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.04526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.895290Z","caller":"traceutil/trace.go:171","msg":"trace[850430809] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:397; }","duration":"140.533663ms","start":"2024-09-04T20:35:25.754733Z","end":"2024-09-04T20:35:25.895267Z","steps":["trace[850430809] 'agreement among raft nodes before linearized reading'  (duration: 53.853651ms)","trace[850430809] 'range keys from in-memory index tree'  (duration: 77.186547ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.954061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.583009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3455"}
	{"level":"info","ts":"2024-09-04T20:35:26.954130Z","caller":"traceutil/trace.go:171","msg":"trace[1772008450] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:444; }","duration":"167.663204ms","start":"2024-09-04T20:35:26.786452Z","end":"2024-09-04T20:35:26.954115Z","steps":["trace[1772008450] 'agreement among raft nodes before linearized reading'  (duration: 111.077805ms)","trace[1772008450] 'range keys from in-memory index tree'  (duration: 56.418272ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.961722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.697616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:26.961805Z","caller":"traceutil/trace.go:171","msg":"trace[2102871173] range","detail":"{range_begin:/registry/serviceaccounts/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:444; }","duration":"175.792769ms","start":"2024-09-04T20:35:26.785998Z","end":"2024-09-04T20:35:26.961790Z","steps":["trace[2102871173] 'agreement among raft nodes before linearized reading'  (duration: 111.129365ms)","trace[2102871173] 'range keys from in-memory index tree'  (duration: 64.517372ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.962247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.843828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-09-04T20:35:26.962291Z","caller":"traceutil/trace.go:171","msg":"trace[2095094196] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:444; }","duration":"175.943485ms","start":"2024-09-04T20:35:26.786339Z","end":"2024-09-04T20:35:26.962282Z","steps":["trace[2095094196] 'agreement among raft nodes before linearized reading'  (duration: 111.201643ms)","trace[2095094196] 'range keys from in-memory index tree'  (duration: 64.583528ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.962505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.470957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gadget/gadget\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:26.962550Z","caller":"traceutil/trace.go:171","msg":"trace[353857431] range","detail":"{range_begin:/registry/serviceaccounts/gadget/gadget; range_end:; response_count:0; response_revision:444; }","duration":"176.513795ms","start":"2024-09-04T20:35:26.786026Z","end":"2024-09-04T20:35:26.962539Z","steps":["trace[353857431] 'agreement among raft nodes before linearized reading'  (duration: 111.527379ms)","trace[353857431] 'range keys from in-memory index tree'  (duration: 64.934536ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-04T20:45:12.338154Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1560}
	{"level":"info","ts":"2024-09-04T20:45:12.368220Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1560,"took":"29.612622ms","hash":3263472688,"current-db-size-bytes":6590464,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3416064,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-04T20:45:12.368273Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3263472688,"revision":1560,"compact-revision":-1}
	{"level":"info","ts":"2024-09-04T20:50:12.347485Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1979}
	{"level":"info","ts":"2024-09-04T20:50:12.372012Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1979,"took":"23.716274ms","hash":3109541245,"current-db-size-bytes":6590464,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3985408,"current-db-size-in-use":"4.0 MB"}
	{"level":"info","ts":"2024-09-04T20:50:12.372473Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3109541245,"revision":1979,"compact-revision":1560}
	
	
	==> gcp-auth [17ccab4a15b4833c23b0926aecd04a59266538dd48181e9afa4051fa2ef4c952] <==
	2024/09/04 20:37:19 GCP Auth Webhook started!
	2024/09/04 20:38:23 Ready to marshal response ...
	2024/09/04 20:38:23 Ready to write response ...
	2024/09/04 20:38:24 Ready to marshal response ...
	2024/09/04 20:38:24 Ready to write response ...
	2024/09/04 20:38:24 Ready to marshal response ...
	2024/09/04 20:38:24 Ready to write response ...
	2024/09/04 20:46:32 Ready to marshal response ...
	2024/09/04 20:46:32 Ready to write response ...
	2024/09/04 20:46:37 Ready to marshal response ...
	2024/09/04 20:46:37 Ready to write response ...
	2024/09/04 20:47:04 Ready to marshal response ...
	2024/09/04 20:47:04 Ready to write response ...
	2024/09/04 20:47:52 Ready to marshal response ...
	2024/09/04 20:47:52 Ready to write response ...
	2024/09/04 20:50:10 Ready to marshal response ...
	2024/09/04 20:50:10 Ready to write response ...
	
	
	==> kernel <==
	 20:50:21 up  4:32,  0 users,  load average: 0.29, 0.40, 1.16
	Linux addons-057989 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] <==
	I0904 20:48:18.287084       1 main.go:299] handling current node
	I0904 20:48:28.286964       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:48:28.287302       1 main.go:299] handling current node
	I0904 20:48:38.287050       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:48:38.287184       1 main.go:299] handling current node
	I0904 20:48:48.287574       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:48:48.287610       1 main.go:299] handling current node
	I0904 20:48:58.286475       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:48:58.286518       1 main.go:299] handling current node
	I0904 20:49:08.286488       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:49:08.286527       1 main.go:299] handling current node
	I0904 20:49:18.286582       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:49:18.286707       1 main.go:299] handling current node
	I0904 20:49:28.287499       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:49:28.287540       1 main.go:299] handling current node
	I0904 20:49:38.293741       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:49:38.293885       1 main.go:299] handling current node
	I0904 20:49:48.288591       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:49:48.288653       1 main.go:299] handling current node
	I0904 20:49:58.286483       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:49:58.286530       1 main.go:299] handling current node
	I0904 20:50:08.294249       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:50:08.294290       1 main.go:299] handling current node
	I0904 20:50:18.287288       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:50:18.287331       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0904 20:37:49.387183       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.223.191:443: connect: connection refused" logger="UnhandledError"
	E0904 20:37:49.390494       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.223.191:443: connect: connection refused" logger="UnhandledError"
	E0904 20:37:49.395595       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.223.191:443: connect: connection refused" logger="UnhandledError"
	I0904 20:37:49.490854       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0904 20:46:44.207537       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0904 20:47:20.716830       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.716996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.747162       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.747299       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.757218       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.757301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.799045       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.799887       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.853654       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.853691       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0904 20:47:21.799451       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0904 20:47:21.854331       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0904 20:47:21.948281       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0904 20:47:47.023165       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0904 20:47:48.067539       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0904 20:47:52.642386       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0904 20:47:52.941344       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.35.92"}
	I0904 20:50:10.969869       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.199.63"}
	
	
	==> kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] <==
	W0904 20:48:52.207208       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:48:52.207254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:49:10.407645       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:49:10.407695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:49:10.950178       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:49:10.950222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:49:26.750845       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:49:26.750896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:49:34.243686       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:49:34.243737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:49:42.096267       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:49:42.096405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:50:03.512064       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:50:03.512109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 20:50:10.712389       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="53.560292ms"
	I0904 20:50:10.726188       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.668908ms"
	I0904 20:50:10.727180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="142.692µs"
	I0904 20:50:10.727522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.446µs"
	W0904 20:50:12.610021       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:50:12.610066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 20:50:12.847321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.746µs"
	I0904 20:50:12.847943       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0904 20:50:12.859648       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0904 20:50:13.123323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.915298ms"
	I0904 20:50:13.123497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="64.975µs"
	
	
	==> kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] <==
	I0904 20:35:27.820676       1 server_linux.go:66] "Using iptables proxy"
	I0904 20:35:28.601961       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0904 20:35:28.602048       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 20:35:28.838061       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 20:35:28.838226       1 server_linux.go:169] "Using iptables Proxier"
	I0904 20:35:28.840297       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 20:35:28.840905       1 server.go:483] "Version info" version="v1.31.0"
	I0904 20:35:28.840973       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 20:35:28.843573       1 config.go:197] "Starting service config controller"
	I0904 20:35:28.843687       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 20:35:28.881197       1 config.go:104] "Starting endpoint slice config controller"
	I0904 20:35:28.881320       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 20:35:28.883028       1 config.go:326] "Starting node config controller"
	I0904 20:35:28.883115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 20:35:28.997291       1 shared_informer.go:320] Caches are synced for node config
	I0904 20:35:29.021198       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0904 20:35:29.044954       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] <==
	W0904 20:35:14.275096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0904 20:35:14.275232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:14.275216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0904 20:35:14.275327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.102647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0904 20:35:15.102790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.146211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0904 20:35:15.146262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.151331       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0904 20:35:15.151486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.194181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0904 20:35:15.194327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.218850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0904 20:35:15.218973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.252691       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0904 20:35:15.252825       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0904 20:35:15.348686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0904 20:35:15.348826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.376639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0904 20:35:15.376765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.392542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0904 20:35:15.392666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.425197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0904 20:35:15.425318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0904 20:35:17.567935       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 20:50:12 addons-057989 kubelet[1510]: I0904 20:50:12.085265    1510 scope.go:117] "RemoveContainer" containerID="e4e68a0857dfde39a247d419f245fe3781a35d12d42481fc2555243e179334fc"
	Sep 04 20:50:12 addons-057989 kubelet[1510]: I0904 20:50:12.137935    1510 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k8vs2\" (UniqueName: \"kubernetes.io/projected/66349fc9-7ad4-480d-b82b-7fb460b850a2-kube-api-access-k8vs2\") on node \"addons-057989\" DevicePath \"\""
	Sep 04 20:50:12 addons-057989 kubelet[1510]: I0904 20:50:12.187925    1510 scope.go:117] "RemoveContainer" containerID="e4e68a0857dfde39a247d419f245fe3781a35d12d42481fc2555243e179334fc"
	Sep 04 20:50:12 addons-057989 kubelet[1510]: E0904 20:50:12.188674    1510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e68a0857dfde39a247d419f245fe3781a35d12d42481fc2555243e179334fc\": container with ID starting with e4e68a0857dfde39a247d419f245fe3781a35d12d42481fc2555243e179334fc not found: ID does not exist" containerID="e4e68a0857dfde39a247d419f245fe3781a35d12d42481fc2555243e179334fc"
	Sep 04 20:50:12 addons-057989 kubelet[1510]: I0904 20:50:12.188715    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e68a0857dfde39a247d419f245fe3781a35d12d42481fc2555243e179334fc"} err="failed to get container status \"e4e68a0857dfde39a247d419f245fe3781a35d12d42481fc2555243e179334fc\": rpc error: code = NotFound desc = could not find container \"e4e68a0857dfde39a247d419f245fe3781a35d12d42481fc2555243e179334fc\": container with ID starting with e4e68a0857dfde39a247d419f245fe3781a35d12d42481fc2555243e179334fc not found: ID does not exist"
	Sep 04 20:50:12 addons-057989 kubelet[1510]: I0904 20:50:12.676934    1510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66349fc9-7ad4-480d-b82b-7fb460b850a2" path="/var/lib/kubelet/pods/66349fc9-7ad4-480d-b82b-7fb460b850a2/volumes"
	Sep 04 20:50:14 addons-057989 kubelet[1510]: I0904 20:50:14.677323    1510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2defa102-a064-41fc-b951-9291980bf3f9" path="/var/lib/kubelet/pods/2defa102-a064-41fc-b951-9291980bf3f9/volumes"
	Sep 04 20:50:14 addons-057989 kubelet[1510]: I0904 20:50:14.678332    1510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ea8ac0-1baf-4d93-9cb5-96ac2bc3d363" path="/var/lib/kubelet/pods/f1ea8ac0-1baf-4d93-9cb5-96ac2bc3d363/volumes"
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.109098    1510 scope.go:117] "RemoveContainer" containerID="45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0"
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.126064    1510 scope.go:117] "RemoveContainer" containerID="45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0"
	Sep 04 20:50:16 addons-057989 kubelet[1510]: E0904 20:50:16.126469    1510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0\": container with ID starting with 45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0 not found: ID does not exist" containerID="45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0"
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.126513    1510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0"} err="failed to get container status \"45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0\": rpc error: code = NotFound desc = could not find container \"45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0\": container with ID starting with 45aa268b85b7a6d015a8b279ce7a8bcb2a53c177cab7c36d9a7fc7b9965e4ad0 not found: ID does not exist"
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.163668    1510 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/acf9942a-b013-45ff-8421-2e697ba3f39b-webhook-cert\") pod \"acf9942a-b013-45ff-8421-2e697ba3f39b\" (UID: \"acf9942a-b013-45ff-8421-2e697ba3f39b\") "
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.163768    1510 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr757\" (UniqueName: \"kubernetes.io/projected/acf9942a-b013-45ff-8421-2e697ba3f39b-kube-api-access-qr757\") pod \"acf9942a-b013-45ff-8421-2e697ba3f39b\" (UID: \"acf9942a-b013-45ff-8421-2e697ba3f39b\") "
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.167098    1510 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf9942a-b013-45ff-8421-2e697ba3f39b-kube-api-access-qr757" (OuterVolumeSpecName: "kube-api-access-qr757") pod "acf9942a-b013-45ff-8421-2e697ba3f39b" (UID: "acf9942a-b013-45ff-8421-2e697ba3f39b"). InnerVolumeSpecName "kube-api-access-qr757". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.169695    1510 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf9942a-b013-45ff-8421-2e697ba3f39b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "acf9942a-b013-45ff-8421-2e697ba3f39b" (UID: "acf9942a-b013-45ff-8421-2e697ba3f39b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.264086    1510 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/acf9942a-b013-45ff-8421-2e697ba3f39b-webhook-cert\") on node \"addons-057989\" DevicePath \"\""
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.264144    1510 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qr757\" (UniqueName: \"kubernetes.io/projected/acf9942a-b013-45ff-8421-2e697ba3f39b-kube-api-access-qr757\") on node \"addons-057989\" DevicePath \"\""
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.677336    1510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf9942a-b013-45ff-8421-2e697ba3f39b" path="/var/lib/kubelet/pods/acf9942a-b013-45ff-8421-2e697ba3f39b/volumes"
	Sep 04 20:50:16 addons-057989 kubelet[1510]: E0904 20:50:16.742625    1510 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320, memory: /docker/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/system.slice/kubelet.service"
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.972437    1510 scope.go:117] "RemoveContainer" containerID="e7757d7bb94ee94e4c4e1f7a44556747fad5b0512e8ae6cf8cac97c79506b43d"
	Sep 04 20:50:16 addons-057989 kubelet[1510]: I0904 20:50:16.990623    1510 scope.go:117] "RemoveContainer" containerID="bf713f4f5efd92c3e7805608d47d18d0af69dd4b3be7f450d252d5ea4c03f1a7"
	Sep 04 20:50:17 addons-057989 kubelet[1510]: E0904 20:50:17.093868    1510 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483017093621661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:539010,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:50:17 addons-057989 kubelet[1510]: E0904 20:50:17.093916    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483017093621661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:539010,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:50:19 addons-057989 kubelet[1510]: E0904 20:50:19.677556    1510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c44deaee-b000-42d4-a04d-514ae8c98a8a"
	
	
	==> storage-provisioner [1020fa8b2d129b2c1528e8263e44e0614430ad1edde0adfc959a0b0cead5e677] <==
	I0904 20:36:09.670277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 20:36:09.684669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 20:36:09.684712       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 20:36:09.692410       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 20:36:09.692859       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-057989_59879aa7-61cd-4c49-a7f4-85b770d0ea1d!
	I0904 20:36:09.694887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"311a2453-e39f-4619-9aa1-2dcff1946c80", APIVersion:"v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-057989_59879aa7-61cd-4c49-a7f4-85b770d0ea1d became leader
	I0904 20:36:09.793462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-057989_59879aa7-61cd-4c49-a7f4-85b770d0ea1d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-057989 -n addons-057989
helpers_test.go:261: (dbg) Run:  kubectl --context addons-057989 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-057989 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-057989 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-057989/192.168.49.2
	Start Time:       Wed, 04 Sep 2024 20:38:24 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k4dt6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k4dt6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-057989
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    108s (x43 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (150.45s)
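The events above show the busybox pod stuck in ImagePullBackOff: the kubelet repeatedly failed to pull gcr.io/k8s-minikube/busybox:1.28.4-glibc with an auth-token error. As a hedged follow-up sketch only (these commands were not part of the recorded run; they assume the addons-057989 profile is still up and reuse only names that appear in the output above), the same pull failure could be re-checked directly:

    # list the pull events for the pod, newest last
    kubectl --context addons-057989 get events --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp
    # retry the pull from inside the node (CRI-O) to see the registry auth error first-hand
    out/minikube-linux-arm64 -p addons-057989 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc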

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (363.14s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.202436ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-fq2ps" [42462678-f110-4415-b2f1-367217f8c8a2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006603647s
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (94.673139ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 12m5.252654828s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (86.289221ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 12m8.885343105s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (89.944272ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 12m11.709520856s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (111.674176ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 12m19.354075456s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (102.015997ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 12m25.561302761s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (91.09971ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 12m46.30324081s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (96.086532ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 13m7.571525311s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (87.673684ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 13m50.446978448s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (124.633061ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 15m4.188638528s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (94.544459ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 15m49.021312794s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (97.315599ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 17m9.865229259s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-057989 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-057989 top pods -n kube-system: exit status 1 (99.516447ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-k9k5f, age: 17m59.031561847s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
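Every kubectl top attempt above fails because the aggregated metrics API never returned pod metrics. As an illustrative sketch only (not executed during this run; the APIService name v1beta1.metrics.k8s.io is taken from the kube-apiserver log above, and deploy/metrics-server is an assumed deployment name behind the metrics-server-84c5f94fbc-fq2ps pod), one could probe the metrics pipeline before the addon is disabled:

    # check whether the aggregated metrics APIService ever became Available
    kubectl --context addons-057989 get apiservice v1beta1.metrics.k8s.io
    # hit the raw endpoint that `kubectl top pods` depends on
    kubectl --context addons-057989 get --raw /apis/metrics.k8s.io/v1beta1/pods
    # look for scrape or TLS errors in the metrics-server logs
    kubectl --context addons-057989 -n kube-system logs deploy/metrics-server --tail=50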
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-057989
helpers_test.go:235: (dbg) docker inspect addons-057989:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320",
	        "Created": "2024-09-04T20:34:52.925359137Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717234,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-04T20:34:53.081030632Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8411aacd61cb8f2a7ae48c92e2c9e76ad4dd701b3dba8b30393c5cc31fbd2b15",
	        "ResolvConfPath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/hostname",
	        "HostsPath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/hosts",
	        "LogPath": "/var/lib/docker/containers/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320/73a9bf2262994229ceabe3ebb243d230aa1ca5a35d1c8a05f96d539cc680a320-json.log",
	        "Name": "/addons-057989",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-057989:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-057989",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be-init/diff:/var/lib/docker/overlay2/e164f50a1bfe4541271ed61a6ed23c33b9aae141da805b23620713759476fde0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd850bee504de257b138e2fb3f5056955979d35717e7f20016e6d26d978366be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-057989",
	                "Source": "/var/lib/docker/volumes/addons-057989/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-057989",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-057989",
	                "name.minikube.sigs.k8s.io": "addons-057989",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7817bba0f58403312356fc6c068a6420e3474d973c3bf9f9708656d8c06482b",
	            "SandboxKey": "/var/run/docker/netns/e7817bba0f58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-057989": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3dc03972bd677b6f27e0f7eb6bf3c869f01a326f25eec49d8a8d16973aa42236",
	                    "EndpointID": "3f89f8bd76b0eaeb20e7ece98c8b5534a50c35ccfbd1872e98138f979cab06b1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-057989",
	                        "73a9bf226299"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-057989 -n addons-057989
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-057989 logs -n 25: (1.50619037s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-110365                                                                     | download-only-110365   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| start   | --download-only -p                                                                          | download-docker-053885 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | download-docker-053885                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-053885                                                                   | download-docker-053885 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-435820   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | binary-mirror-435820                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40553                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-435820                                                                     | binary-mirror-435820   | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | addons-057989                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | addons-057989                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-057989 --wait=true                                                                | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-057989 addons                                                                        | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-057989 addons                                                                        | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-057989 ip                                                                            | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	| addons  | addons-057989 addons disable                                                                | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:47 UTC | 04 Sep 24 20:47 UTC |
	|         | addons-057989                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-057989 ssh curl -s                                                                   | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-057989 ip                                                                            | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:50 UTC | 04 Sep 24 20:50 UTC |
	| addons  | addons-057989 addons disable                                                                | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:50 UTC | 04 Sep 24 20:50 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-057989 addons disable                                                                | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:50 UTC | 04 Sep 24 20:50 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-057989 ssh cat                                                                       | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:50 UTC | 04 Sep 24 20:50 UTC |
	|         | /opt/local-path-provisioner/pvc-40c44ff0-5aea-4b87-80a2-1fb89aeac81e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-057989 addons disable                                                                | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:50 UTC | 04 Sep 24 20:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-057989 addons disable                                                                | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:51 UTC | 04 Sep 24 20:51 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:51 UTC | 04 Sep 24 20:51 UTC |
	|         | -p addons-057989                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:51 UTC | 04 Sep 24 20:51 UTC |
	|         | addons-057989                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:51 UTC | 04 Sep 24 20:51 UTC |
	|         | -p addons-057989                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-057989 addons disable                                                                | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:51 UTC | 04 Sep 24 20:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-057989 addons                                                                        | addons-057989          | jenkins | v1.34.0 | 04 Sep 24 20:53 UTC | 04 Sep 24 20:53 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 20:34:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:34:26.364635  716742 out.go:345] Setting OutFile to fd 1 ...
	I0904 20:34:26.364772  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:26.364783  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:34:26.364788  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:26.365015  716742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 20:34:26.365465  716742 out.go:352] Setting JSON to false
	I0904 20:34:26.366331  716742 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15417,"bootTime":1725466650,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0904 20:34:26.366402  716742 start.go:139] virtualization:  
	I0904 20:34:26.368474  716742 out.go:177] * [addons-057989] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0904 20:34:26.370910  716742 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 20:34:26.371038  716742 notify.go:220] Checking for updates...
	I0904 20:34:26.374838  716742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:34:26.376708  716742 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:34:26.378539  716742 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	I0904 20:34:26.380267  716742 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 20:34:26.382309  716742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 20:34:26.384843  716742 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 20:34:26.407170  716742 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0904 20:34:26.407296  716742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:26.468251  716742 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-04 20:34:26.458330655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:26.468366  716742 docker.go:307] overlay module found
	I0904 20:34:26.470430  716742 out.go:177] * Using the docker driver based on user configuration
	I0904 20:34:26.472360  716742 start.go:297] selected driver: docker
	I0904 20:34:26.472375  716742 start.go:901] validating driver "docker" against <nil>
	I0904 20:34:26.472388  716742 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 20:34:26.473037  716742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:26.545537  716742 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-04 20:34:26.534442525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:26.545707  716742 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 20:34:26.546029  716742 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:34:26.547715  716742 out.go:177] * Using Docker driver with root privileges
	I0904 20:34:26.549443  716742 cni.go:84] Creating CNI manager for ""
	I0904 20:34:26.549486  716742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:34:26.549498  716742 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 20:34:26.549644  716742 start.go:340] cluster config:
	{Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPau
seInterval:1m0s}
	I0904 20:34:26.551789  716742 out.go:177] * Starting "addons-057989" primary control-plane node in "addons-057989" cluster
	I0904 20:34:26.553334  716742 cache.go:121] Beginning downloading kic base image for docker with crio
	I0904 20:34:26.554936  716742 out.go:177] * Pulling base image v0.0.45 ...
	I0904 20:34:26.556687  716742 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:26.556763  716742 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0904 20:34:26.556783  716742 cache.go:56] Caching tarball of preloaded images
	I0904 20:34:26.556881  716742 preload.go:172] Found /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0904 20:34:26.556903  716742 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0904 20:34:26.557407  716742 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/config.json ...
	I0904 20:34:26.557448  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/config.json: {Name:mk4c159eebe676425fef59d6562583fda185ed7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:26.557673  716742 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0904 20:34:26.576982  716742 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0904 20:34:26.577098  716742 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0904 20:34:26.577129  716742 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0904 20:34:26.577146  716742 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0904 20:34:26.577161  716742 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0904 20:34:26.577168  716742 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0904 20:34:44.336400  716742 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0904 20:34:44.336436  716742 cache.go:194] Successfully downloaded all kic artifacts
	I0904 20:34:44.336481  716742 start.go:360] acquireMachinesLock for addons-057989: {Name:mk0970b3a3d59ebd1c006a89f39ceb89ec07a595 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 20:34:44.337080  716742 start.go:364] duration metric: took 571.787µs to acquireMachinesLock for "addons-057989"
	I0904 20:34:44.337123  716742 start.go:93] Provisioning new machine with config: &{Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:34:44.337224  716742 start.go:125] createHost starting for "" (driver="docker")
	I0904 20:34:44.340059  716742 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0904 20:34:44.340312  716742 start.go:159] libmachine.API.Create for "addons-057989" (driver="docker")
	I0904 20:34:44.340356  716742 client.go:168] LocalClient.Create starting
	I0904 20:34:44.340489  716742 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem
	I0904 20:34:45.869727  716742 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem
	I0904 20:34:46.857527  716742 cli_runner.go:164] Run: docker network inspect addons-057989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 20:34:46.873370  716742 cli_runner.go:211] docker network inspect addons-057989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 20:34:46.873460  716742 network_create.go:284] running [docker network inspect addons-057989] to gather additional debugging logs...
	I0904 20:34:46.873484  716742 cli_runner.go:164] Run: docker network inspect addons-057989
	W0904 20:34:46.888882  716742 cli_runner.go:211] docker network inspect addons-057989 returned with exit code 1
	I0904 20:34:46.888915  716742 network_create.go:287] error running [docker network inspect addons-057989]: docker network inspect addons-057989: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-057989 not found
	I0904 20:34:46.888929  716742 network_create.go:289] output of [docker network inspect addons-057989]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-057989 not found
	
	** /stderr **
	I0904 20:34:46.889032  716742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:34:46.906072  716742 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017d4850}
	I0904 20:34:46.906116  716742 network_create.go:124] attempt to create docker network addons-057989 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0904 20:34:46.906182  716742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-057989 addons-057989
	I0904 20:34:46.976444  716742 network_create.go:108] docker network addons-057989 192.168.49.0/24 created
	I0904 20:34:46.976479  716742 kic.go:121] calculated static IP "192.168.49.2" for the "addons-057989" container
	I0904 20:34:46.976555  716742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 20:34:46.992316  716742 cli_runner.go:164] Run: docker volume create addons-057989 --label name.minikube.sigs.k8s.io=addons-057989 --label created_by.minikube.sigs.k8s.io=true
	I0904 20:34:47.012952  716742 oci.go:103] Successfully created a docker volume addons-057989
	I0904 20:34:47.013072  716742 cli_runner.go:164] Run: docker run --rm --name addons-057989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-057989 --entrypoint /usr/bin/test -v addons-057989:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib
	I0904 20:34:48.615772  716742 cli_runner.go:217] Completed: docker run --rm --name addons-057989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-057989 --entrypoint /usr/bin/test -v addons-057989:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib: (1.602654612s)
	I0904 20:34:48.615806  716742 oci.go:107] Successfully prepared a docker volume addons-057989
	I0904 20:34:48.615827  716742 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:48.615846  716742 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 20:34:48.615942  716742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-057989:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 20:34:52.860186  716742 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-057989:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir: (4.244202783s)
	I0904 20:34:52.860217  716742 kic.go:203] duration metric: took 4.244368465s to extract preloaded images to volume ...
	W0904 20:34:52.860378  716742 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 20:34:52.860496  716742 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 20:34:52.910765  716742 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-057989 --name addons-057989 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-057989 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-057989 --network addons-057989 --ip 192.168.49.2 --volume addons-057989:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85
	I0904 20:34:53.252893  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Running}}
	I0904 20:34:53.272312  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:34:53.296422  716742 cli_runner.go:164] Run: docker exec addons-057989 stat /var/lib/dpkg/alternatives/iptables
	I0904 20:34:53.389330  716742 oci.go:144] the created container "addons-057989" has a running status.
	I0904 20:34:53.389362  716742 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa...
	I0904 20:34:54.130907  716742 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 20:34:54.153369  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:34:54.171584  716742 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 20:34:54.171605  716742 kic_runner.go:114] Args: [docker exec --privileged addons-057989 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 20:34:54.258829  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:34:54.279547  716742 machine.go:93] provisionDockerMachine start ...
	I0904 20:34:54.279655  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:54.304920  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:54.305248  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:54.305259  716742 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 20:34:54.430287  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-057989
	
	I0904 20:34:54.430311  716742 ubuntu.go:169] provisioning hostname "addons-057989"
	I0904 20:34:54.430389  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:54.451013  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:54.451268  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:54.451286  716742 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-057989 && echo "addons-057989" | sudo tee /etc/hostname
	I0904 20:34:54.595269  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-057989
	
	I0904 20:34:54.595355  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:54.613079  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:54.613362  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:54.613384  716742 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-057989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-057989/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-057989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 20:34:54.733925  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 20:34:54.734009  716742 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19575-710603/.minikube CaCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19575-710603/.minikube}
	I0904 20:34:54.734036  716742 ubuntu.go:177] setting up certificates
	I0904 20:34:54.734046  716742 provision.go:84] configureAuth start
	I0904 20:34:54.734112  716742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-057989
	I0904 20:34:54.752741  716742 provision.go:143] copyHostCerts
	I0904 20:34:54.752830  716742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem (1082 bytes)
	I0904 20:34:54.752951  716742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem (1123 bytes)
	I0904 20:34:54.753017  716742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem (1675 bytes)
	I0904 20:34:54.753069  716742 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem org=jenkins.addons-057989 san=[127.0.0.1 192.168.49.2 addons-057989 localhost minikube]
	I0904 20:34:55.147333  716742 provision.go:177] copyRemoteCerts
	I0904 20:34:55.147404  716742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 20:34:55.147447  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.165454  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.255682  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 20:34:55.281142  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 20:34:55.305289  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 20:34:55.329497  716742 provision.go:87] duration metric: took 595.436326ms to configureAuth
	I0904 20:34:55.329576  716742 ubuntu.go:193] setting minikube options for container-runtime
	I0904 20:34:55.329784  716742 config.go:182] Loaded profile config "addons-057989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:34:55.329932  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.346253  716742 main.go:141] libmachine: Using SSH client type: native
	I0904 20:34:55.346495  716742 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0904 20:34:55.346516  716742 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 20:34:55.565686  716742 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 20:34:55.565710  716742 machine.go:96] duration metric: took 1.286141461s to provisionDockerMachine
	I0904 20:34:55.565720  716742 client.go:171] duration metric: took 11.225352854s to LocalClient.Create
	I0904 20:34:55.565732  716742 start.go:167] duration metric: took 11.225421054s to libmachine.API.Create "addons-057989"
	I0904 20:34:55.565740  716742 start.go:293] postStartSetup for "addons-057989" (driver="docker")
	I0904 20:34:55.565751  716742 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 20:34:55.565817  716742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 20:34:55.565881  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.583171  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.671214  716742 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 20:34:55.674581  716742 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 20:34:55.674617  716742 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 20:34:55.674629  716742 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 20:34:55.674636  716742 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0904 20:34:55.674651  716742 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/addons for local assets ...
	I0904 20:34:55.674722  716742 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/files for local assets ...
	I0904 20:34:55.674750  716742 start.go:296] duration metric: took 109.004783ms for postStartSetup
	I0904 20:34:55.675068  716742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-057989
	I0904 20:34:55.690396  716742 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/config.json ...
	I0904 20:34:55.690692  716742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 20:34:55.690748  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.706620  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.790685  716742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 20:34:55.795488  716742 start.go:128] duration metric: took 11.458247104s to createHost
	I0904 20:34:55.795510  716742 start.go:83] releasing machines lock for "addons-057989", held for 11.458409136s
	I0904 20:34:55.795590  716742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-057989
	I0904 20:34:55.811993  716742 ssh_runner.go:195] Run: cat /version.json
	I0904 20:34:55.812023  716742 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 20:34:55.812045  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.812092  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:34:55.832455  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:55.839481  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:34:56.100314  716742 ssh_runner.go:195] Run: systemctl --version
	I0904 20:34:56.104670  716742 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 20:34:56.252641  716742 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 20:34:56.256985  716742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:34:56.275558  716742 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 20:34:56.275632  716742 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 20:34:56.310401  716742 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 20:34:56.310468  716742 start.go:495] detecting cgroup driver to use...
	I0904 20:34:56.310517  716742 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 20:34:56.310578  716742 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 20:34:56.326154  716742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 20:34:56.337266  716742 docker.go:217] disabling cri-docker service (if available) ...
	I0904 20:34:56.337385  716742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 20:34:56.352198  716742 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 20:34:56.367450  716742 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 20:34:56.455787  716742 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 20:34:56.554238  716742 docker.go:233] disabling docker service ...
	I0904 20:34:56.554351  716742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 20:34:56.574710  716742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 20:34:56.587825  716742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 20:34:56.687299  716742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 20:34:56.786601  716742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 20:34:56.799474  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 20:34:56.817328  716742 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0904 20:34:56.817397  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.827886  716742 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 20:34:56.828012  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.838976  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.849064  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.859185  716742 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 20:34:56.868303  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.878343  716742 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.894559  716742 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 20:34:56.904955  716742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 20:34:56.914184  716742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 20:34:56.924030  716742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:34:57.018394  716742 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 20:34:57.139627  716742 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 20:34:57.139769  716742 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 20:34:57.143458  716742 start.go:563] Will wait 60s for crictl version
	I0904 20:34:57.143551  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:34:57.146967  716742 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 20:34:57.187619  716742 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 20:34:57.187782  716742 ssh_runner.go:195] Run: crio --version
	I0904 20:34:57.230327  716742 ssh_runner.go:195] Run: crio --version
	I0904 20:34:57.274907  716742 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0904 20:34:57.276866  716742 cli_runner.go:164] Run: docker network inspect addons-057989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 20:34:57.292471  716742 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 20:34:57.296202  716742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:34:57.307224  716742 kubeadm.go:883] updating cluster {Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 20:34:57.307355  716742 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:57.307428  716742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:34:57.381955  716742 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:34:57.381980  716742 crio.go:433] Images already preloaded, skipping extraction
	I0904 20:34:57.382038  716742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 20:34:57.418097  716742 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 20:34:57.418121  716742 cache_images.go:84] Images are preloaded, skipping loading
	I0904 20:34:57.418129  716742 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0904 20:34:57.418229  716742 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-057989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 20:34:57.418319  716742 ssh_runner.go:195] Run: crio config
	I0904 20:34:57.464713  716742 cni.go:84] Creating CNI manager for ""
	I0904 20:34:57.464736  716742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:34:57.464747  716742 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 20:34:57.464800  716742 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-057989 NodeName:addons-057989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 20:34:57.464994  716742 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-057989"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 20:34:57.465097  716742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0904 20:34:57.474398  716742 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 20:34:57.474488  716742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 20:34:57.483198  716742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 20:34:57.501099  716742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 20:34:57.519783  716742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
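The kubeadm config dumped above is written to /var/tmp/minikube/kubeadm.yaml.new as a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged sketch, using gopkg.in/yaml.v3, that splits such a file into its documents and lists each apiVersion/kind; the module path and field names are standard, but this is not minikube's own code:

    // listKinds decodes every YAML document in the generated kubeadm config and
    // prints its apiVersion and kind, confirming the four sections shown above.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var tm typeMeta
            if err := dec.Decode(&tm); err == io.EOF {
                break
            } else if err != nil {
                fmt.Fprintln(os.Stderr, "decode:", err)
                os.Exit(1)
            }
            fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
        }
    }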
	I0904 20:34:57.538347  716742 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 20:34:57.541777  716742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 20:34:57.552363  716742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:34:57.634363  716742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:34:57.648929  716742 certs.go:68] Setting up /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989 for IP: 192.168.49.2
	I0904 20:34:57.648953  716742 certs.go:194] generating shared ca certs ...
	I0904 20:34:57.648969  716742 certs.go:226] acquiring lock for ca certs: {Name:mkc3a04cbc0797b819dd3c9fec2eaef93961640b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:57.649112  716742 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key
	I0904 20:34:58.017005  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt ...
	I0904 20:34:58.017043  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt: {Name:mkd95f3346a423afb0e8673b5e71292af3b74b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.017249  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key ...
	I0904 20:34:58.017258  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key: {Name:mk7f9cfa6bde577b19e8374855b89bb733281fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.017337  716742 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key
	I0904 20:34:58.453146  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt ...
	I0904 20:34:58.453179  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt: {Name:mkc81f7ed4f8bfbc83feffd55dc281d29aeb677f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.453378  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key ...
	I0904 20:34:58.453392  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key: {Name:mkbe99405547ce66fa15a0dc370e003355394a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:58.453479  716742 certs.go:256] generating profile certs ...
	I0904 20:34:58.453540  716742 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.key
	I0904 20:34:58.453557  716742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt with IP's: []
	I0904 20:34:59.380258  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt ...
	I0904 20:34:59.380292  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: {Name:mk7c6d25eef31d0f7545d21b444aedd95ab50fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.380484  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.key ...
	I0904 20:34:59.380497  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.key: {Name:mkb45fcd6f95ce4da37194a6bfd862e0659e59dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.380587  716742 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52
	I0904 20:34:59.380609  716742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0904 20:34:59.599935  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52 ...
	I0904 20:34:59.599969  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52: {Name:mk9bcac5f1d69cf17a003755e1f54f813baa3753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.600669  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52 ...
	I0904 20:34:59.600690  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52: {Name:mkfc8797a4c6c62540408d4ff8b05ec0fca2be8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:34:59.601327  716742 certs.go:381] copying /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt.2b1b1c52 -> /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt
	I0904 20:34:59.601450  716742 certs.go:385] copying /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key.2b1b1c52 -> /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key
	I0904 20:34:59.601512  716742 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key
	I0904 20:34:59.601536  716742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt with IP's: []
	I0904 20:35:00.752243  716742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt ...
	I0904 20:35:00.752285  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt: {Name:mkb03587981395a01bab503d3182ecbc4b34513d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:00.752500  716742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key ...
	I0904 20:35:00.752523  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key: {Name:mk4bede3c921b1b8c749a338fcf99d9201d566d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:00.752724  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 20:35:00.752773  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem (1082 bytes)
	I0904 20:35:00.752808  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem (1123 bytes)
	I0904 20:35:00.752835  716742 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem (1675 bytes)
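certs.go has now built the local PKI: the shared minikubeCA and proxyClientCA, then the profile certs for the client, the apiserver (signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2), and the aggregator proxy client. A compact, self-contained sketch of generating a self-signed CA like minikubeCA with the Go standard library (illustrative only; minikube's crypto.go helpers differ in details such as key size and expiry):

    // newSelfSignedCA creates an RSA key and a self-signed CA certificate and
    // returns both as PEM, roughly what generating the "minikubeCA" ca cert does.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "os"
        "time"
    )

    func newSelfSignedCA(cn string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: cn},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        cert, _, err := newSelfSignedCA("minikubeCA")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        os.Stdout.Write(cert)
    }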
	I0904 20:35:00.753523  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 20:35:00.789146  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 20:35:00.819640  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 20:35:00.849973  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 20:35:00.878887  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 20:35:00.907623  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 20:35:00.937179  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 20:35:00.966650  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 20:35:00.994197  716742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 20:35:01.076673  716742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 20:35:01.103416  716742 ssh_runner.go:195] Run: openssl version
	I0904 20:35:01.109985  716742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 20:35:01.121951  716742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:35:01.125896  716742 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:35:01.125966  716742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 20:35:01.133559  716742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 20:35:01.143886  716742 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 20:35:01.147606  716742 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 20:35:01.147661  716742 kubeadm.go:392] StartCluster: {Name:addons-057989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-057989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:35:01.147757  716742 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 20:35:01.147846  716742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 20:35:01.189885  716742 cri.go:89] found id: ""
	I0904 20:35:01.189957  716742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 20:35:01.199399  716742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 20:35:01.209262  716742 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 20:35:01.209331  716742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 20:35:01.219268  716742 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 20:35:01.219297  716742 kubeadm.go:157] found existing configuration files:
	
	I0904 20:35:01.219416  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 20:35:01.229900  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 20:35:01.230022  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 20:35:01.239899  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 20:35:01.249771  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 20:35:01.249888  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 20:35:01.259492  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 20:35:01.269530  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 20:35:01.269617  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 20:35:01.278828  716742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 20:35:01.288498  716742 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 20:35:01.288597  716742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 20:35:01.297744  716742 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 20:35:01.340416  716742 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0904 20:35:01.340765  716742 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 20:35:01.363185  716742 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 20:35:01.363359  716742 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0904 20:35:01.363401  716742 kubeadm.go:310] OS: Linux
	I0904 20:35:01.363457  716742 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 20:35:01.363510  716742 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 20:35:01.363559  716742 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 20:35:01.363608  716742 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 20:35:01.363657  716742 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 20:35:01.363717  716742 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 20:35:01.363769  716742 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 20:35:01.363825  716742 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 20:35:01.363876  716742 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 20:35:01.428620  716742 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 20:35:01.428815  716742 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 20:35:01.428964  716742 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 20:35:01.438302  716742 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 20:35:01.440900  716742 out.go:235]   - Generating certificates and keys ...
	I0904 20:35:01.441007  716742 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 20:35:01.441111  716742 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 20:35:02.500413  716742 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 20:35:03.216919  716742 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 20:35:03.705499  716742 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 20:35:04.250336  716742 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 20:35:04.593576  716742 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 20:35:04.593731  716742 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-057989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:35:06.225872  716742 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 20:35:06.226139  716742 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-057989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 20:35:06.469736  716742 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 20:35:07.150890  716742 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 20:35:07.404958  716742 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 20:35:07.405173  716742 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 20:35:07.560068  716742 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 20:35:07.823478  716742 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 20:35:08.117805  716742 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 20:35:08.681466  716742 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 20:35:08.809396  716742 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 20:35:08.810004  716742 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 20:35:08.813080  716742 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 20:35:08.815884  716742 out.go:235]   - Booting up control plane ...
	I0904 20:35:08.815986  716742 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 20:35:08.816061  716742 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 20:35:08.817199  716742 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 20:35:08.827622  716742 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 20:35:08.833681  716742 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 20:35:08.833734  716742 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 20:35:08.922848  716742 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 20:35:08.922965  716742 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 20:35:09.928124  716742 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005060114s
	I0904 20:35:09.928209  716742 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0904 20:35:15.930014  716742 kubeadm.go:310] [api-check] The API server is healthy after 6.002333677s
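kubeadm gates progress here on two HTTP health probes: the kubelet's healthz at http://127.0.0.1:10248/healthz (healthy after ~1.0s in this run) and the API server's health endpoint (healthy after ~6.0s). A minimal polling sketch of that style of check (hypothetical helper, not kubeadm's own code):

    // waitHealthz polls an HTTP healthz endpoint until it returns 200 OK or the
    // overall timeout elapses, the same shape as the kubelet-check and api-check
    // phases logged above.
    package main

    import (
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        client := &http.Client{Timeout: 2 * time.Second}
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("kubelet is healthy")
    }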
	I0904 20:35:15.952027  716742 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 20:35:15.967084  716742 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 20:35:15.992846  716742 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 20:35:15.993056  716742 kubeadm.go:310] [mark-control-plane] Marking the node addons-057989 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 20:35:16.042250  716742 kubeadm.go:310] [bootstrap-token] Using token: mex69v.g494u4t2bbxooj6i
	I0904 20:35:16.044971  716742 out.go:235]   - Configuring RBAC rules ...
	I0904 20:35:16.045134  716742 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 20:35:16.055705  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 20:35:16.065010  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 20:35:16.069238  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 20:35:16.073658  716742 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 20:35:16.078420  716742 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 20:35:16.342354  716742 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 20:35:16.769206  716742 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 20:35:17.338932  716742 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 20:35:17.341063  716742 kubeadm.go:310] 
	I0904 20:35:17.341135  716742 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 20:35:17.341141  716742 kubeadm.go:310] 
	I0904 20:35:17.341215  716742 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 20:35:17.341220  716742 kubeadm.go:310] 
	I0904 20:35:17.341244  716742 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 20:35:17.341301  716742 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 20:35:17.341349  716742 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 20:35:17.341354  716742 kubeadm.go:310] 
	I0904 20:35:17.341406  716742 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 20:35:17.341410  716742 kubeadm.go:310] 
	I0904 20:35:17.341456  716742 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 20:35:17.341460  716742 kubeadm.go:310] 
	I0904 20:35:17.341510  716742 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 20:35:17.341581  716742 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 20:35:17.341648  716742 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 20:35:17.341652  716742 kubeadm.go:310] 
	I0904 20:35:17.341733  716742 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 20:35:17.341807  716742 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 20:35:17.341812  716742 kubeadm.go:310] 
	I0904 20:35:17.341912  716742 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mex69v.g494u4t2bbxooj6i \
	I0904 20:35:17.342013  716742 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6a9d6c5dd15cce5623c32315b379ca4db8b8a42e6190c248e6260d57259d6bc7 \
	I0904 20:35:17.342033  716742 kubeadm.go:310] 	--control-plane 
	I0904 20:35:17.342037  716742 kubeadm.go:310] 
	I0904 20:35:17.342119  716742 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 20:35:17.342123  716742 kubeadm.go:310] 
	I0904 20:35:17.342202  716742 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mex69v.g494u4t2bbxooj6i \
	I0904 20:35:17.342301  716742 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6a9d6c5dd15cce5623c32315b379ca4db8b8a42e6190c248e6260d57259d6bc7 
	I0904 20:35:17.347001  716742 kubeadm.go:310] W0904 20:35:01.336876    1192 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 20:35:17.347293  716742 kubeadm.go:310] W0904 20:35:01.337821    1192 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 20:35:17.347501  716742 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0904 20:35:17.347608  716742 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
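The join commands printed above include a --discovery-token-ca-cert-hash; for kubeadm this value is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from the CA cert (assumes the standard kubeadm hash format; the path is the cert dir used in this run):

    // caCertHash recomputes the sha256:<hex> value kubeadm prints as
    // --discovery-token-ca-cert-hash by hashing the CA's SubjectPublicKeyInfo.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func caCertHash(caCertPath string) (string, error) {
        pemBytes, err := os.ReadFile(caCertPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caCertPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum[:]), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(h)
    }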
	I0904 20:35:17.347628  716742 cni.go:84] Creating CNI manager for ""
	I0904 20:35:17.347636  716742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:35:17.349758  716742 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0904 20:35:17.351534  716742 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 20:35:17.355486  716742 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0904 20:35:17.355512  716742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 20:35:17.375340  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 20:35:17.651571  716742 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 20:35:17.651721  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:17.651808  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-057989 minikube.k8s.io/updated_at=2024_09_04T20_35_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af minikube.k8s.io/name=addons-057989 minikube.k8s.io/primary=true
	I0904 20:35:17.806018  716742 ops.go:34] apiserver oom_adj: -16
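The oom_adj value of -16 above comes from reading /proc/$(pgrep kube-apiserver)/oom_adj on the node. A hypothetical Go equivalent of that lookup (walks /proc, matches the process comm, reads its oom_adj; not minikube's ops.go):

    // apiserverOOMAdj mimics "cat /proc/$(pgrep kube-apiserver)/oom_adj".
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func apiserverOOMAdj() (string, error) {
        procs, err := filepath.Glob("/proc/[0-9]*")
        if err != nil {
            return "", err
        }
        for _, p := range procs {
            comm, err := os.ReadFile(filepath.Join(p, "comm"))
            if err != nil {
                continue // process may have exited; skip it
            }
            if strings.TrimSpace(string(comm)) != "kube-apiserver" {
                continue
            }
            adj, err := os.ReadFile(filepath.Join(p, "oom_adj"))
            if err != nil {
                return "", err
            }
            return strings.TrimSpace(string(adj)), nil
        }
        return "", fmt.Errorf("kube-apiserver process not found")
    }

    func main() {
        adj, err := apiserverOOMAdj()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("apiserver oom_adj:", adj)
    }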
	I0904 20:35:17.806125  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:18.306818  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:18.807224  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:19.306261  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:19.806958  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:20.307070  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:20.806301  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:21.306858  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:21.806305  716742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 20:35:21.906144  716742 kubeadm.go:1113] duration metric: took 4.254470959s to wait for elevateKubeSystemPrivileges
	I0904 20:35:21.906172  716742 kubeadm.go:394] duration metric: took 20.758516173s to StartCluster
	I0904 20:35:21.906191  716742 settings.go:142] acquiring lock: {Name:mk78ce0fd69886ee058af8e675a61cdabc51cba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:21.906305  716742 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:35:21.906748  716742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/kubeconfig: {Name:mk99c3c6b541fdaa941aef3f7a9cb265a3595a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 20:35:21.906950  716742 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 20:35:21.907130  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 20:35:21.907400  716742 config.go:182] Loaded profile config "addons-057989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:35:21.907438  716742 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0904 20:35:21.907532  716742 addons.go:69] Setting yakd=true in profile "addons-057989"
	I0904 20:35:21.907555  716742 addons.go:234] Setting addon yakd=true in "addons-057989"
	I0904 20:35:21.907606  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.908075  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.908362  716742 addons.go:69] Setting inspektor-gadget=true in profile "addons-057989"
	I0904 20:35:21.908387  716742 addons.go:234] Setting addon inspektor-gadget=true in "addons-057989"
	I0904 20:35:21.908419  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.908817  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.909134  716742 addons.go:69] Setting metrics-server=true in profile "addons-057989"
	I0904 20:35:21.909160  716742 addons.go:234] Setting addon metrics-server=true in "addons-057989"
	I0904 20:35:21.909185  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.909577  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.912812  716742 addons.go:69] Setting cloud-spanner=true in profile "addons-057989"
	I0904 20:35:21.912905  716742 addons.go:234] Setting addon cloud-spanner=true in "addons-057989"
	I0904 20:35:21.912980  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.913489  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913871  716742 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-057989"
	I0904 20:35:21.928862  716742 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-057989"
	I0904 20:35:21.928901  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.929448  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913884  716742 addons.go:69] Setting default-storageclass=true in profile "addons-057989"
	I0904 20:35:21.943849  716742 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-057989"
	I0904 20:35:21.944216  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913889  716742 addons.go:69] Setting gcp-auth=true in profile "addons-057989"
	I0904 20:35:21.955803  716742 mustload.go:65] Loading cluster: addons-057989
	I0904 20:35:21.956040  716742 config.go:182] Loaded profile config "addons-057989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:35:21.956384  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913901  716742 addons.go:69] Setting ingress=true in profile "addons-057989"
	I0904 20:35:21.966350  716742 addons.go:234] Setting addon ingress=true in "addons-057989"
	I0904 20:35:21.966464  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.969340  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.913908  716742 addons.go:69] Setting ingress-dns=true in profile "addons-057989"
	I0904 20:35:21.971332  716742 addons.go:234] Setting addon ingress-dns=true in "addons-057989"
	I0904 20:35:21.971431  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:21.971919  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.916107  716742 out.go:177] * Verifying Kubernetes components...
	I0904 20:35:21.999362  716742 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 20:35:21.917099  716742 addons.go:69] Setting storage-provisioner=true in profile "addons-057989"
	I0904 20:35:21.917117  716742 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-057989"
	I0904 20:35:21.917124  716742 addons.go:69] Setting registry=true in profile "addons-057989"
	I0904 20:35:22.006635  716742 addons.go:234] Setting addon registry=true in "addons-057989"
	I0904 20:35:21.917132  716742 addons.go:69] Setting volcano=true in profile "addons-057989"
	I0904 20:35:21.917139  716742 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-057989"
	I0904 20:35:22.006816  716742 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-057989"
	I0904 20:35:22.026442  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:21.917152  716742 addons.go:69] Setting volumesnapshots=true in profile "addons-057989"
	I0904 20:35:22.041769  716742 addons.go:234] Setting addon volumesnapshots=true in "addons-057989"
	I0904 20:35:22.041853  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.042370  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.026070  716742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 20:35:22.006570  716742 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-057989"
	I0904 20:35:22.052266  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.053017  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.026105  716742 addons.go:234] Setting addon storage-provisioner=true in "addons-057989"
	I0904 20:35:22.060087  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.006737  716742 addons.go:234] Setting addon volcano=true in "addons-057989"
	I0904 20:35:22.062161  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.062789  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.026143  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.068254  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 20:35:22.068273  716742 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 20:35:22.068332  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.080628  716742 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0904 20:35:22.083423  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.096846  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.117959  716742 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0904 20:35:22.117983  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 20:35:22.118049  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.132214  716742 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0904 20:35:22.133366  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.138078  716742 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0904 20:35:22.138539  716742 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0904 20:35:22.138556  716742 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0904 20:35:22.138623  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.166630  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 20:35:22.166698  716742 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 20:35:22.166806  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.187781  716742 addons.go:234] Setting addon default-storageclass=true in "addons-057989"
	I0904 20:35:22.187824  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.190235  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.232774  716742 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0904 20:35:22.247605  716742 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:35:22.247675  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0904 20:35:22.247754  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.264137  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 20:35:22.299532  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 20:35:22.266024  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 20:35:22.300947  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	W0904 20:35:22.302072  716742 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0904 20:35:22.306600  716742 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0904 20:35:22.308467  716742 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:35:22.308490  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 20:35:22.308561  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.309318  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 20:35:22.313375  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 20:35:22.317741  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 20:35:22.317922  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 20:35:22.317937  716742 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 20:35:22.318014  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.330389  716742 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-057989"
	I0904 20:35:22.330438  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:22.330871  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:22.340063  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 20:35:22.340202  716742 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 20:35:22.340246  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0904 20:35:22.342136  716742 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:35:22.342209  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 20:35:22.342312  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.352340  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 20:35:22.352873  716742 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:35:22.352890  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 20:35:22.352954  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.359146  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 20:35:22.359286  716742 out.go:177]   - Using image docker.io/registry:2.8.3
	I0904 20:35:22.360580  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.364471  716742 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0904 20:35:22.364583  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 20:35:22.369625  716742 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 20:35:22.369649  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 20:35:22.369712  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.374826  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 20:35:22.376699  716742 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 20:35:22.378465  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 20:35:22.378495  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 20:35:22.378570  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.406541  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.423195  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.433564  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.450279  716742 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 20:35:22.450300  716742 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 20:35:22.450362  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.497954  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.502029  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.528022  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.537365  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.538274  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.539752  716742 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 20:35:22.543162  716742 out.go:177]   - Using image docker.io/busybox:stable
	I0904 20:35:22.544052  716742 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 20:35:22.545019  716742 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:35:22.545046  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 20:35:22.545106  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:22.545573  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.561427  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	W0904 20:35:22.564313  716742 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 20:35:22.564341  716742 retry.go:31] will retry after 236.417212ms: ssh: handshake failed: EOF
	I0904 20:35:22.575605  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:22.736511  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 20:35:22.736588  716742 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 20:35:22.857432  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 20:35:22.895253  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 20:35:22.895325  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 20:35:22.905358  716742 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 20:35:22.905429  716742 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 20:35:22.907620  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 20:35:22.907687  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 20:35:22.925747  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 20:35:22.930451  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 20:35:22.934382  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 20:35:22.934456  716742 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 20:35:22.971971  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 20:35:22.972050  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 20:35:22.989798  716742 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:35:22.989897  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 20:35:22.992576  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 20:35:22.994880  716742 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 20:35:22.994953  716742 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 20:35:23.022131  716742 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0904 20:35:23.022204  716742 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0904 20:35:23.038813  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 20:35:23.070590  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 20:35:23.070665  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 20:35:23.070896  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 20:35:23.070951  716742 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 20:35:23.095955  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 20:35:23.124642  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 20:35:23.186119  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 20:35:23.186188  716742 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 20:35:23.224395  716742 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 20:35:23.224470  716742 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 20:35:23.228123  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 20:35:23.241258  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 20:35:23.241328  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 20:35:23.258884  716742 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0904 20:35:23.258919  716742 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0904 20:35:23.288949  716742 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:35:23.288988  716742 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 20:35:23.332003  716742 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:35:23.332068  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 20:35:23.378638  716742 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0904 20:35:23.378709  716742 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0904 20:35:23.434112  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 20:35:23.459281  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 20:35:23.459356  716742 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 20:35:23.472898  716742 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 20:35:23.472962  716742 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 20:35:23.482949  716742 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0904 20:35:23.483014  716742 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0904 20:35:23.532337  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 20:35:23.556346  716742 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 20:35:23.556420  716742 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 20:35:23.626276  716742 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0904 20:35:23.626351  716742 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0904 20:35:23.629728  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 20:35:23.629797  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 20:35:23.681098  716742 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:35:23.681173  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 20:35:23.724178  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 20:35:23.724253  716742 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 20:35:23.735934  716742 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 20:35:23.736008  716742 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0904 20:35:23.757739  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:35:23.792226  716742 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 20:35:23.792306  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0904 20:35:23.835787  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 20:35:23.835863  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 20:35:23.900406  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 20:35:23.962593  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 20:35:23.962673  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0904 20:35:24.071188  716742 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:35:24.071310  716742 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 20:35:24.264803  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 20:35:26.048113  716742 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.748363933s)
	I0904 20:35:26.048143  716742 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0904 20:35:26.048457  716742 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.504382184s)
	I0904 20:35:26.050163  716742 node_ready.go:35] waiting up to 6m0s for node "addons-057989" to be "Ready" ...
	I0904 20:35:26.688497  716742 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-057989" context rescaled to 1 replicas
	I0904 20:35:27.060768  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.203246964s)
	I0904 20:35:27.060885  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.135070848s)
	I0904 20:35:28.027815  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.035163919s)
	I0904 20:35:28.027966  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.097438105s)
	I0904 20:35:28.028181  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.989297484s)
	I0904 20:35:28.075309  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:29.030465  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.934428715s)
	I0904 20:35:29.030503  716742 addons.go:475] Verifying addon ingress=true in "addons-057989"
	I0904 20:35:29.030752  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.906040067s)
	I0904 20:35:29.031105  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.802909693s)
	I0904 20:35:29.031127  716742 addons.go:475] Verifying addon registry=true in "addons-057989"
	I0904 20:35:29.031230  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.498818017s)
	I0904 20:35:29.031251  716742 addons.go:475] Verifying addon metrics-server=true in "addons-057989"
	I0904 20:35:29.031166  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.596978997s)
	I0904 20:35:29.033224  716742 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-057989 service yakd-dashboard -n yakd-dashboard
	
	I0904 20:35:29.033237  716742 out.go:177] * Verifying ingress addon...
	I0904 20:35:29.033252  716742 out.go:177] * Verifying registry addon...
	I0904 20:35:29.037301  716742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 20:35:29.038325  716742 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 20:35:29.097659  716742 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:35:29.097684  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:29.098201  716742 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 20:35:29.098256  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:29.195841  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.438010547s)
	W0904 20:35:29.195882  716742 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 20:35:29.195933  716742 retry.go:31] will retry after 325.249505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 20:35:29.196029  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.295502009s)
	I0904 20:35:29.437031  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.172116276s)
	I0904 20:35:29.437112  716742 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-057989"
	I0904 20:35:29.440223  716742 out.go:177] * Verifying csi-hostpath-driver addon...
	I0904 20:35:29.443551  716742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 20:35:29.448779  716742 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:35:29.448848  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:29.521627  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 20:35:29.578076  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:29.580777  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:29.958657  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:30.072439  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:30.090922  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:30.097632  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:30.448445  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:30.549658  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:30.550969  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:30.807018  716742 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.285307799s)
	I0904 20:35:30.955804  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:31.046561  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:31.047254  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:31.448608  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:31.549691  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:31.550127  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:31.947929  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:32.048692  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:32.051282  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:32.448193  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:32.549459  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:32.552498  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:32.555459  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:32.731032  716742 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 20:35:32.731194  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:32.758991  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:32.884535  716742 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 20:35:32.939016  716742 addons.go:234] Setting addon gcp-auth=true in "addons-057989"
	I0904 20:35:32.939069  716742 host.go:66] Checking if "addons-057989" exists ...
	I0904 20:35:32.939529  716742 cli_runner.go:164] Run: docker container inspect addons-057989 --format={{.State.Status}}
	I0904 20:35:32.953253  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:32.961330  716742 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 20:35:32.961383  716742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-057989
	I0904 20:35:32.995512  716742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/addons-057989/id_rsa Username:docker}
	I0904 20:35:33.054398  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:33.055434  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:33.128086  716742 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0904 20:35:33.129944  716742 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 20:35:33.131593  716742 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 20:35:33.131723  716742 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 20:35:33.165445  716742 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 20:35:33.165474  716742 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 20:35:33.188075  716742 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:35:33.188102  716742 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 20:35:33.209359  716742 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 20:35:33.448116  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:33.543925  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:33.544466  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:33.820050  716742 addons.go:475] Verifying addon gcp-auth=true in "addons-057989"
	I0904 20:35:33.822068  716742 out.go:177] * Verifying gcp-auth addon...
	I0904 20:35:33.824433  716742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 20:35:33.848245  716742 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 20:35:33.848266  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:33.947852  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:34.042923  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:34.043825  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:34.328229  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:34.447947  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:34.540311  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:34.542876  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:34.828284  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:34.947534  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:35.042636  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:35.049332  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:35.055122  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:35.330404  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:35.447274  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:35.545340  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:35.546029  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:35.828501  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:35.947228  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:36.063530  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:36.064339  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:36.328724  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:36.448339  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:36.540949  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:36.542310  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:36.827715  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:36.947278  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:37.043049  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:37.043853  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:37.327744  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:37.447836  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:37.540336  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:37.542627  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:37.554000  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:37.828004  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:37.947413  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:38.040942  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:38.043212  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:38.328167  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:38.448210  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:38.540971  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:38.542858  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:38.828183  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:38.948377  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:39.040766  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:39.043219  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:39.327632  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:39.447836  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:39.540398  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:39.542110  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:39.827478  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:39.946894  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:40.070189  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:40.070554  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:40.076822  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:40.328253  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:40.447552  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:40.541741  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:40.542536  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:40.827594  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:40.947071  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:41.040958  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:41.042547  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:41.328122  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:41.447352  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:41.542033  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:41.542923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:41.828247  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:41.947505  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:42.042016  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:42.042953  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:42.328660  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:42.449670  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:42.541520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:42.542951  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:42.554073  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:42.828426  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:42.946914  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:43.041666  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:43.043506  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:43.327678  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:43.447222  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:43.540542  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:43.542481  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:43.827880  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:43.947351  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:44.041905  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:44.042784  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:44.327910  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:44.447338  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:44.540692  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:44.541928  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:44.554128  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:44.827401  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:44.947350  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:45.041914  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:45.046397  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:45.329483  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:45.447350  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:45.541340  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:45.542595  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:45.829160  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:45.947522  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:46.043569  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:46.043935  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:46.327999  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:46.446905  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:46.540678  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:46.542173  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:46.554289  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:46.828320  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:46.947897  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:47.041203  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:47.043417  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:47.327978  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:47.447928  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:47.541679  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:47.542621  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:47.827779  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:47.947381  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:48.042174  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:48.043408  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:48.327968  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:48.447259  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:48.540821  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:48.543687  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:48.829248  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:48.947876  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:49.040634  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:49.042550  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:49.053837  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:49.328536  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:49.446861  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:49.542397  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:49.542833  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:49.828476  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:49.947595  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:50.041512  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:50.045789  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:50.327937  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:50.447163  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:50.541487  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:50.542220  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:50.827853  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:50.947721  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:51.046096  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:51.047230  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:51.054679  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:51.328388  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:51.447569  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:51.541997  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:51.543417  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:51.827469  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:51.947631  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:52.041212  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:52.042374  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:52.328608  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:52.447910  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:52.540711  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:52.542265  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:52.827999  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:52.947770  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:53.041301  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:53.044146  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:53.328745  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:53.448068  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:53.541574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:53.542484  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:53.553703  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:53.828116  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:53.948990  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:54.053696  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:54.054631  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:54.328538  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:54.447218  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:54.540711  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:54.542591  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:54.828305  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:54.948206  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:55.053401  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:55.058261  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:55.328289  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:55.447828  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:55.542014  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:55.543655  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:55.553772  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:55.827440  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:55.953905  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:56.041609  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:56.042737  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:56.327941  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:56.447334  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:56.541509  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:56.542497  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:56.828523  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:56.948026  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:57.040845  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:57.043079  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:57.329593  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:57.447143  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:57.542303  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:57.543506  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:57.553805  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:35:57.827884  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:57.946797  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:58.042695  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:58.043278  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:58.328009  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:58.447726  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:58.545605  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:58.548895  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:58.828520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:58.947520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:59.041341  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:59.042204  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:59.327308  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:59.447465  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:35:59.542482  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:35:59.542723  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:35:59.827941  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:35:59.946996  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:00.043176  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:00.104534  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:00.105059  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:00.340965  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:00.449041  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:00.551366  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:00.579002  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:00.828953  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:00.947554  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:01.040695  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:01.042741  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:01.328717  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:01.447395  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:01.540951  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:01.543418  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:01.828763  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:01.947431  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:02.046929  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:02.047190  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:02.327715  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:02.447709  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:02.541086  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:02.542605  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:02.553881  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:02.828768  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:02.947541  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:03.042304  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:03.043125  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:03.328588  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:03.448032  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:03.541497  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:03.544175  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:03.828676  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:03.947371  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:04.044486  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:04.045078  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:04.327521  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:04.447671  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:04.541369  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:04.542203  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:04.828334  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:04.946895  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:05.045224  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:05.055073  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:05.062604  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:05.327935  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:05.447801  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:05.542368  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:05.542768  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:05.827526  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:05.947604  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:06.048078  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:06.049923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:06.330381  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:06.447521  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:06.542229  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:06.542596  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:06.828109  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:06.947118  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:07.040788  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:07.042886  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:07.328337  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:07.447866  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:07.541481  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:07.542481  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:07.553750  716742 node_ready.go:53] node "addons-057989" has status "Ready":"False"
	I0904 20:36:07.827705  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:07.948008  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:08.042417  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:08.042859  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:08.328090  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:08.447525  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:08.540604  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:08.543433  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:08.837623  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:09.030014  716742 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 20:36:09.030047  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:09.071445  716742 node_ready.go:49] node "addons-057989" has status "Ready":"True"
	I0904 20:36:09.071472  716742 node_ready.go:38] duration metric: took 43.021269395s for node "addons-057989" to be "Ready" ...
	I0904 20:36:09.071484  716742 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 20:36:09.089105  716742 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 20:36:09.089125  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:09.090847  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:09.095775  716742 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-k9k5f" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:09.350645  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:09.533278  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:09.574617  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:09.575871  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:09.828155  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:09.978224  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:10.129591  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:10.131040  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:10.143649  716742 pod_ready.go:93] pod "coredns-6f6b679f8f-k9k5f" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.143680  716742 pod_ready.go:82] duration metric: took 1.047863266s for pod "coredns-6f6b679f8f-k9k5f" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.143707  716742 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.159088  716742 pod_ready.go:93] pod "etcd-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.159117  716742 pod_ready.go:82] duration metric: took 15.402507ms for pod "etcd-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.159133  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.171488  716742 pod_ready.go:93] pod "kube-apiserver-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.171518  716742 pod_ready.go:82] duration metric: took 12.375537ms for pod "kube-apiserver-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.171532  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.178648  716742 pod_ready.go:93] pod "kube-controller-manager-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.178676  716742 pod_ready.go:82] duration metric: took 7.13601ms for pod "kube-controller-manager-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.178691  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nc7jl" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.254762  716742 pod_ready.go:93] pod "kube-proxy-nc7jl" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.254796  716742 pod_ready.go:82] duration metric: took 76.096913ms for pod "kube-proxy-nc7jl" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.254811  716742 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.328843  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:10.449765  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:10.544678  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:10.545672  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:10.654072  716742 pod_ready.go:93] pod "kube-scheduler-addons-057989" in "kube-system" namespace has status "Ready":"True"
	I0904 20:36:10.654096  716742 pod_ready.go:82] duration metric: took 399.277222ms for pod "kube-scheduler-addons-057989" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.654108  716742 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace to be "Ready" ...
	I0904 20:36:10.829645  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:10.950366  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:11.050101  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:11.050780  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:11.328571  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:11.449457  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:11.542566  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:11.546248  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:11.830499  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:11.950704  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:12.054205  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:12.055128  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:12.327897  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:12.449726  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:12.543917  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:12.544796  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:12.661369  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:12.827882  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:12.949012  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:13.043831  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:13.046158  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:13.329536  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:13.450676  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:13.545035  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:13.545415  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:13.830985  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:13.948606  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:14.042504  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:14.048344  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:14.328120  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:14.450281  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:14.542904  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:14.544350  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:14.829872  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:14.951552  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:15.047571  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:15.048435  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:15.169903  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:15.329914  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:15.449286  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:15.548925  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:15.549916  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:15.828156  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:15.948710  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:16.055199  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:16.057193  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:16.328604  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:16.448365  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:16.542507  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:16.543480  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:16.828454  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:16.949666  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:17.043797  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:17.044826  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:17.329042  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:17.448956  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:17.550653  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:17.552483  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:17.662444  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:17.828887  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:17.949344  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:18.076252  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:18.077448  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:18.329262  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:18.450325  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:18.542144  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:18.544796  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:18.829083  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:18.949802  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:19.044416  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:19.045190  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:19.328890  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:19.449574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:19.544186  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:19.544394  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:19.835752  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:19.949187  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:20.048527  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:20.049968  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:20.178776  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:20.328791  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:20.449953  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:20.555512  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:20.556916  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:20.831683  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:20.948574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:21.044140  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:21.048581  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:21.329130  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:21.450562  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:21.549963  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:21.550903  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:21.829631  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:21.949015  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:22.046374  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:22.047617  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:22.328800  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:22.449023  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:22.542511  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:22.544223  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:22.660408  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:22.828655  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:22.949624  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:23.044461  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:23.046219  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:23.328978  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:23.448751  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:23.545036  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:23.546547  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:23.828770  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:23.949131  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:24.044659  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:24.044828  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:24.328713  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:24.448992  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:24.543975  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:24.544525  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:24.828665  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:24.948789  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:25.044058  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:25.045416  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:25.177094  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:25.329520  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:25.448758  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:25.544315  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:25.546862  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:25.829047  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:25.949309  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:26.042483  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:26.042805  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:26.327729  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:26.448903  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:26.546711  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:26.551153  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:26.829442  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:26.949733  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:27.046187  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:27.046298  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:27.328645  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:27.450636  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:27.546923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:27.548930  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:27.661472  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:27.831563  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:27.951278  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:28.048305  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:28.050473  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:28.327740  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:28.448212  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:28.541352  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:28.544411  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:28.829661  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:28.949150  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:29.044402  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:29.045775  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:29.329086  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:29.460355  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:29.544885  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:29.547005  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:29.661692  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:29.829876  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:29.949946  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:30.054365  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:30.068184  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:30.329170  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:30.450074  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:30.544607  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:30.545699  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:30.828795  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:30.951635  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:31.043816  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:31.045286  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:31.329490  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:31.449348  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:31.543204  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:31.544300  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:31.661927  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:31.828620  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:31.950251  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:32.046442  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:32.048445  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:32.330831  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:32.449125  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:32.542854  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:32.543884  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:32.828558  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:32.948725  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:33.042434  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:33.043885  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:33.330491  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:33.448583  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:33.542414  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:33.542780  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:33.828458  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:33.949029  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:34.055376  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:34.056938  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:34.167167  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:34.329045  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:34.453047  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:34.543030  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:34.546398  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:34.829227  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:34.949551  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:35.050127  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:35.053256  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:35.329695  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:35.448676  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:35.542169  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:35.544984  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:35.828200  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:35.948869  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:36.044814  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:36.057927  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:36.335920  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:36.450101  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:36.541680  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:36.543120  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:36.662952  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:36.828448  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:36.948348  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:37.066077  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:37.066409  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:37.328407  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:37.448523  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:37.542789  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:37.543300  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:37.828403  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:37.949498  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:38.043389  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:38.046624  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:38.328771  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:38.448599  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:38.542461  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:38.544745  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:38.828115  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:38.952286  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:39.042378  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:39.044673  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:39.174566  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:39.334517  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:39.453611  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:39.543024  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:39.544086  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:39.828352  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:39.950126  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:40.056483  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:40.061766  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:40.328992  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:40.461098  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:40.544321  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:40.546350  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:40.828978  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:40.948665  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:41.051314  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:41.058138  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:41.328917  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:41.449551  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:41.550313  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:41.551220  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:41.664986  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:41.828514  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:41.952465  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:42.058001  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:42.059479  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:42.331960  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:42.454692  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:42.546320  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:42.547473  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:42.829791  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:42.950245  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:43.044498  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:43.046219  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:43.328405  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:43.454381  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:43.543453  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:43.544110  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:43.828245  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:43.948674  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:44.042409  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:44.043849  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:44.162551  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:44.328009  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:44.448599  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:44.543132  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:44.545358  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:44.828792  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:44.948747  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:45.087640  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:45.088284  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:45.344816  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:45.449644  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:45.544359  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:45.545990  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:45.829341  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:45.949813  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:46.058111  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:46.058856  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:46.170340  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:46.328474  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:46.449570  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:46.544097  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:46.545673  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:46.829642  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:46.949053  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:47.044822  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:47.046478  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:47.328302  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:47.449782  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:47.546336  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:47.551098  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:47.831960  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:47.949636  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:48.060368  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:48.060851  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:48.329136  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:48.449568  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:48.544057  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:48.544859  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:48.663951  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:48.829109  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:48.949424  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:49.044052  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:49.045593  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:49.333734  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:49.450504  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:49.545632  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:49.547608  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:49.828093  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:49.950097  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:50.069427  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:50.086326  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:50.331658  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:50.449560  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:50.542436  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:50.547014  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:50.832752  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:50.952128  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:51.047186  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:51.050572  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:51.163570  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:51.328417  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:51.449441  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:51.544087  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:51.544380  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:51.829033  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:51.949061  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:52.045024  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:52.045918  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:52.328926  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:52.448578  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:52.541879  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:52.542126  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:52.830078  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:52.963921  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:53.041818  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:53.044116  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:53.170106  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:53.328609  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:53.448820  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:53.544307  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:53.545486  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:53.829016  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:53.948618  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:54.041664  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:54.044249  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:54.328767  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:54.449124  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:54.541717  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 20:36:54.543353  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:54.828607  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:54.948470  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:55.051321  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:55.054553  716742 kapi.go:107] duration metric: took 1m26.017250397s to wait for kubernetes.io/minikube-addons=registry ...
	I0904 20:36:55.328521  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:55.448501  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:55.543065  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:55.660302  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:55.827945  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:55.951015  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:56.045384  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:56.332183  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:56.449054  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:56.546373  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:56.830492  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:56.949915  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:57.044356  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:57.328294  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:57.449445  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:57.544569  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:57.674148  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:36:57.830780  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:57.950151  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:58.046701  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:58.331040  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:58.448712  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:58.544968  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:58.828987  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:58.948346  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:59.042906  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:59.332382  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:59.449048  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:36:59.543704  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:36:59.832339  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:36:59.948917  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:00.044377  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:00.222015  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:00.329414  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:00.487456  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:00.546702  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:00.828580  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:00.950943  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:01.043973  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:01.330576  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:01.448174  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:01.542648  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:01.860366  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:01.965465  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:02.052101  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:02.333323  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:02.449358  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:02.543801  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:02.669696  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:02.838776  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:02.951038  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:03.049985  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:03.327773  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:03.448941  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:03.542817  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:03.828613  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:03.948689  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:04.046396  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:04.328336  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:04.449088  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:04.543775  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:04.828884  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:04.952546  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:05.068506  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:05.207353  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:05.328427  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:05.448931  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:05.543510  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:05.828526  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:05.951448  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:06.047048  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:06.328134  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:06.449585  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:06.543072  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:06.829018  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:06.950732  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:07.044010  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:07.328664  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:07.448981  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:07.542915  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:07.661203  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:07.828185  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:07.953699  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:08.043242  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:08.328648  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:08.448488  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:08.543420  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:08.827853  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:08.956859  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:09.045880  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:09.335624  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:09.450095  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:09.543668  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:09.835572  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:09.950374  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:10.053252  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:10.166375  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:10.328364  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:10.449574  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:10.543407  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:10.828502  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:10.949028  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:11.042952  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:11.329417  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:11.453866  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:11.544063  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:11.829066  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:11.950959  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:12.047688  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:12.178429  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:12.337289  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:12.449041  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 20:37:12.543497  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:12.829095  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:12.951547  716742 kapi.go:107] duration metric: took 1m43.507997526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0904 20:37:13.048935  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:13.335226  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:13.543922  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:13.828005  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:14.043281  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:14.328415  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:14.543294  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:14.660776  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:14.828685  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:15.072923  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:15.330103  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:15.542664  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:15.829204  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:16.058275  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:16.328850  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:16.542954  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:16.828673  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:17.042581  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:17.163249  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:17.327534  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:17.543624  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:17.827957  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:18.045331  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:18.329952  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:18.543428  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:18.829780  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:19.043830  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:19.168137  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:19.328673  716742 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 20:37:19.544966  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:19.828409  716742 kapi.go:107] duration metric: took 1m46.003970775s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 20:37:19.830433  716742 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-057989 cluster.
	I0904 20:37:19.832052  716742 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 20:37:19.833771  716742 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0904 20:37:20.045906  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:20.556992  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:21.043726  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:21.173518  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:21.543224  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:22.045682  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:22.543970  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:23.045347  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:23.174483  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:23.543944  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:24.045448  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:24.544024  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:25.074393  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:25.543997  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:25.661982  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:26.066374  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:26.543191  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:27.043090  716742 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 20:37:27.542940  716742 kapi.go:107] duration metric: took 1m58.504613312s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 20:37:27.545069  716742 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0904 20:37:27.546958  716742 addons.go:510] duration metric: took 2m5.639519055s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0904 20:37:28.163426  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:30.166696  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:32.661223  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:35.164273  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:37.660899  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:40.164467  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:42.166438  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:44.661375  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:46.662396  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:48.662560  716742 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"False"
	I0904 20:37:49.660114  716742 pod_ready.go:93] pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace has status "Ready":"True"
	I0904 20:37:49.660145  716742 pod_ready.go:82] duration metric: took 1m39.006028182s for pod "metrics-server-84c5f94fbc-fq2ps" in "kube-system" namespace to be "Ready" ...
	I0904 20:37:49.660158  716742 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hxn5k" in "kube-system" namespace to be "Ready" ...
	I0904 20:37:49.666071  716742 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-hxn5k" in "kube-system" namespace has status "Ready":"True"
	I0904 20:37:49.666099  716742 pod_ready.go:82] duration metric: took 5.93149ms for pod "nvidia-device-plugin-daemonset-hxn5k" in "kube-system" namespace to be "Ready" ...
	I0904 20:37:49.666121  716742 pod_ready.go:39] duration metric: took 1m40.594604615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 20:37:49.666138  716742 api_server.go:52] waiting for apiserver process to appear ...
	I0904 20:37:49.666166  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 20:37:49.666227  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 20:37:49.723728  716742 cri.go:89] found id: "8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:37:49.723759  716742 cri.go:89] found id: ""
	I0904 20:37:49.723767  716742 logs.go:276] 1 containers: [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59]
	I0904 20:37:49.723827  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.727548  716742 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 20:37:49.727628  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 20:37:49.775692  716742 cri.go:89] found id: "4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:37:49.775716  716742 cri.go:89] found id: ""
	I0904 20:37:49.775725  716742 logs.go:276] 1 containers: [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971]
	I0904 20:37:49.775781  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.779581  716742 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 20:37:49.779678  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 20:37:49.819669  716742 cri.go:89] found id: "2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:37:49.819693  716742 cri.go:89] found id: ""
	I0904 20:37:49.819702  716742 logs.go:276] 1 containers: [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b]
	I0904 20:37:49.819758  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.823267  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 20:37:49.823362  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 20:37:49.862094  716742 cri.go:89] found id: "d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:37:49.862116  716742 cri.go:89] found id: ""
	I0904 20:37:49.862124  716742 logs.go:276] 1 containers: [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608]
	I0904 20:37:49.862225  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.865865  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 20:37:49.865988  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 20:37:49.907687  716742 cri.go:89] found id: "13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:37:49.907711  716742 cri.go:89] found id: ""
	I0904 20:37:49.907720  716742 logs.go:276] 1 containers: [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5]
	I0904 20:37:49.907804  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.911524  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 20:37:49.911619  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 20:37:49.963560  716742 cri.go:89] found id: "7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:37:49.963588  716742 cri.go:89] found id: ""
	I0904 20:37:49.963595  716742 logs.go:276] 1 containers: [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743]
	I0904 20:37:49.963722  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:49.967436  716742 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 20:37:49.967512  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 20:37:50.027766  716742 cri.go:89] found id: "508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:37:50.027790  716742 cri.go:89] found id: ""
	I0904 20:37:50.027799  716742 logs.go:276] 1 containers: [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706]
	I0904 20:37:50.027863  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:37:50.049546  716742 logs.go:123] Gathering logs for dmesg ...
	I0904 20:37:50.049571  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 20:37:50.105332  716742 logs.go:123] Gathering logs for coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] ...
	I0904 20:37:50.105413  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:37:50.221432  716742 logs.go:123] Gathering logs for kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] ...
	I0904 20:37:50.221473  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:37:50.276905  716742 logs.go:123] Gathering logs for CRI-O ...
	I0904 20:37:50.276941  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 20:37:50.382777  716742 logs.go:123] Gathering logs for kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] ...
	I0904 20:37:50.382817  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:37:50.476194  716742 logs.go:123] Gathering logs for kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] ...
	I0904 20:37:50.476232  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:37:50.522576  716742 logs.go:123] Gathering logs for container status ...
	I0904 20:37:50.522612  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 20:37:50.578692  716742 logs.go:123] Gathering logs for kubelet ...
	I0904 20:37:50.578725  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0904 20:37:50.609335  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:37:50.609580  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:37:50.671992  716742 logs.go:123] Gathering logs for describe nodes ...
	I0904 20:37:50.672029  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 20:37:50.867431  716742 logs.go:123] Gathering logs for kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] ...
	I0904 20:37:50.867459  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:37:50.932411  716742 logs.go:123] Gathering logs for etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] ...
	I0904 20:37:50.932448  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:37:51.003529  716742 logs.go:123] Gathering logs for kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] ...
	I0904 20:37:51.003585  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:37:51.053994  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:37:51.054031  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0904 20:37:51.054160  716742 out.go:270] X Problems detected in kubelet:
	W0904 20:37:51.054202  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:37:51.054228  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:37:51.054237  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:37:51.054276  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:38:01.054577  716742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 20:38:01.068412  716742 api_server.go:72] duration metric: took 2m39.161430852s to wait for apiserver process to appear ...
	I0904 20:38:01.068486  716742 api_server.go:88] waiting for apiserver healthz status ...
	I0904 20:38:01.068530  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 20:38:01.068610  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 20:38:01.106967  716742 cri.go:89] found id: "8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:01.106991  716742 cri.go:89] found id: ""
	I0904 20:38:01.106998  716742 logs.go:276] 1 containers: [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59]
	I0904 20:38:01.107057  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.110991  716742 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 20:38:01.111071  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 20:38:01.160285  716742 cri.go:89] found id: "4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:01.160309  716742 cri.go:89] found id: ""
	I0904 20:38:01.160316  716742 logs.go:276] 1 containers: [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971]
	I0904 20:38:01.160377  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.164548  716742 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 20:38:01.164621  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 20:38:01.214500  716742 cri.go:89] found id: "2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:01.214527  716742 cri.go:89] found id: ""
	I0904 20:38:01.214536  716742 logs.go:276] 1 containers: [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b]
	I0904 20:38:01.214599  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.218732  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 20:38:01.218808  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 20:38:01.261426  716742 cri.go:89] found id: "d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:01.261458  716742 cri.go:89] found id: ""
	I0904 20:38:01.261468  716742 logs.go:276] 1 containers: [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608]
	I0904 20:38:01.261535  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.265381  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 20:38:01.265456  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 20:38:01.304546  716742 cri.go:89] found id: "13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:01.304570  716742 cri.go:89] found id: ""
	I0904 20:38:01.304578  716742 logs.go:276] 1 containers: [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5]
	I0904 20:38:01.304635  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.308267  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 20:38:01.308344  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 20:38:01.348771  716742 cri.go:89] found id: "7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:01.348801  716742 cri.go:89] found id: ""
	I0904 20:38:01.348811  716742 logs.go:276] 1 containers: [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743]
	I0904 20:38:01.348873  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.353679  716742 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 20:38:01.353756  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 20:38:01.395097  716742 cri.go:89] found id: "508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:01.395119  716742 cri.go:89] found id: ""
	I0904 20:38:01.395127  716742 logs.go:276] 1 containers: [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706]
	I0904 20:38:01.395200  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:01.399164  716742 logs.go:123] Gathering logs for CRI-O ...
	I0904 20:38:01.399196  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 20:38:01.498148  716742 logs.go:123] Gathering logs for container status ...
	I0904 20:38:01.498186  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 20:38:01.560705  716742 logs.go:123] Gathering logs for kubelet ...
	I0904 20:38:01.560738  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0904 20:38:01.587961  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:01.588204  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:01.651595  716742 logs.go:123] Gathering logs for kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] ...
	I0904 20:38:01.651629  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:01.724603  716742 logs.go:123] Gathering logs for kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] ...
	I0904 20:38:01.724637  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:01.785389  716742 logs.go:123] Gathering logs for kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] ...
	I0904 20:38:01.785426  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:01.826786  716742 logs.go:123] Gathering logs for kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] ...
	I0904 20:38:01.826821  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:01.866479  716742 logs.go:123] Gathering logs for kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] ...
	I0904 20:38:01.866509  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:01.938042  716742 logs.go:123] Gathering logs for dmesg ...
	I0904 20:38:01.938147  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 20:38:01.964182  716742 logs.go:123] Gathering logs for describe nodes ...
	I0904 20:38:01.964208  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 20:38:02.154346  716742 logs.go:123] Gathering logs for etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] ...
	I0904 20:38:02.154476  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:02.217734  716742 logs.go:123] Gathering logs for coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] ...
	I0904 20:38:02.217780  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:02.287714  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:02.287744  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0904 20:38:02.287830  716742 out.go:270] X Problems detected in kubelet:
	W0904 20:38:02.287844  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:02.287878  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:02.287888  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:02.287901  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:38:12.289049  716742 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 20:38:12.297611  716742 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0904 20:38:12.298713  716742 api_server.go:141] control plane version: v1.31.0
	I0904 20:38:12.298744  716742 api_server.go:131] duration metric: took 11.230244619s to wait for apiserver health ...
	I0904 20:38:12.298754  716742 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 20:38:12.298777  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 20:38:12.298845  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 20:38:12.341281  716742 cri.go:89] found id: "8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:12.341303  716742 cri.go:89] found id: ""
	I0904 20:38:12.341311  716742 logs.go:276] 1 containers: [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59]
	I0904 20:38:12.341369  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.345210  716742 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 20:38:12.345295  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 20:38:12.384841  716742 cri.go:89] found id: "4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:12.384863  716742 cri.go:89] found id: ""
	I0904 20:38:12.384871  716742 logs.go:276] 1 containers: [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971]
	I0904 20:38:12.384934  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.388824  716742 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 20:38:12.388897  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 20:38:12.432322  716742 cri.go:89] found id: "2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:12.432344  716742 cri.go:89] found id: ""
	I0904 20:38:12.432352  716742 logs.go:276] 1 containers: [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b]
	I0904 20:38:12.432410  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.436102  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 20:38:12.436180  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 20:38:12.474996  716742 cri.go:89] found id: "d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:12.475018  716742 cri.go:89] found id: ""
	I0904 20:38:12.475025  716742 logs.go:276] 1 containers: [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608]
	I0904 20:38:12.475087  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.478648  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 20:38:12.478726  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 20:38:12.522943  716742 cri.go:89] found id: "13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:12.523008  716742 cri.go:89] found id: ""
	I0904 20:38:12.523022  716742 logs.go:276] 1 containers: [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5]
	I0904 20:38:12.523085  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.526855  716742 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 20:38:12.526930  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 20:38:12.577119  716742 cri.go:89] found id: "7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:12.577153  716742 cri.go:89] found id: ""
	I0904 20:38:12.577190  716742 logs.go:276] 1 containers: [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743]
	I0904 20:38:12.577249  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.580701  716742 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 20:38:12.580774  716742 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 20:38:12.624944  716742 cri.go:89] found id: "508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:12.624967  716742 cri.go:89] found id: ""
	I0904 20:38:12.624975  716742 logs.go:276] 1 containers: [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706]
	I0904 20:38:12.625035  716742 ssh_runner.go:195] Run: which crictl
	I0904 20:38:12.628574  716742 logs.go:123] Gathering logs for kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] ...
	I0904 20:38:12.628599  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706"
	I0904 20:38:12.672932  716742 logs.go:123] Gathering logs for dmesg ...
	I0904 20:38:12.672968  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 20:38:12.691130  716742 logs.go:123] Gathering logs for etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] ...
	I0904 20:38:12.691159  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971"
	I0904 20:38:12.746973  716742 logs.go:123] Gathering logs for kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] ...
	I0904 20:38:12.747054  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608"
	I0904 20:38:12.807676  716742 logs.go:123] Gathering logs for kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] ...
	I0904 20:38:12.807724  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5"
	I0904 20:38:12.855232  716742 logs.go:123] Gathering logs for kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] ...
	I0904 20:38:12.855264  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743"
	I0904 20:38:12.929481  716742 logs.go:123] Gathering logs for CRI-O ...
	I0904 20:38:12.929521  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 20:38:13.034293  716742 logs.go:123] Gathering logs for container status ...
	I0904 20:38:13.034341  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 20:38:13.089651  716742 logs.go:123] Gathering logs for kubelet ...
	I0904 20:38:13.089682  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0904 20:38:13.118436  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:13.118683  716742 logs.go:138] Found kubelet problem: Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:13.184420  716742 logs.go:123] Gathering logs for describe nodes ...
	I0904 20:38:13.184459  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 20:38:13.328501  716742 logs.go:123] Gathering logs for kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] ...
	I0904 20:38:13.328532  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59"
	I0904 20:38:13.382614  716742 logs.go:123] Gathering logs for coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] ...
	I0904 20:38:13.382650  716742 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b"
	I0904 20:38:13.450831  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:13.450864  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0904 20:38:13.450946  716742 out.go:270] X Problems detected in kubelet:
	W0904 20:38:13.450959  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: W0904 20:36:08.699328    1510 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-057989" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-057989' and this object
	W0904 20:38:13.450989  716742 out.go:270]   Sep 04 20:36:08 addons-057989 kubelet[1510]: E0904 20:36:08.699385    1510 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-057989\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-057989' and this object" logger="UnhandledError"
	I0904 20:38:13.450998  716742 out.go:358] Setting ErrFile to fd 2...
	I0904 20:38:13.451010  716742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:38:23.464733  716742 system_pods.go:59] 18 kube-system pods found
	I0904 20:38:23.464785  716742 system_pods.go:61] "coredns-6f6b679f8f-k9k5f" [275ab65d-8cdd-4e33-9a30-8e2dea82c08e] Running
	I0904 20:38:23.464791  716742 system_pods.go:61] "csi-hostpath-attacher-0" [415f2771-f4e0-4711-90b4-bbb3cd155351] Running
	I0904 20:38:23.464798  716742 system_pods.go:61] "csi-hostpath-resizer-0" [fcf10418-cc7b-4979-851d-4f6623df5536] Running
	I0904 20:38:23.464803  716742 system_pods.go:61] "csi-hostpathplugin-mn9qp" [0f3278f5-fc14-4f5d-a426-c25a64816e1c] Running
	I0904 20:38:23.464835  716742 system_pods.go:61] "etcd-addons-057989" [e11680b5-b6b4-44d1-bd13-f62d154e2a01] Running
	I0904 20:38:23.464846  716742 system_pods.go:61] "kindnet-xh95z" [0ad1e90a-ac7c-4bde-a26d-ff3f11c0f743] Running
	I0904 20:38:23.464851  716742 system_pods.go:61] "kube-apiserver-addons-057989" [5aea7959-e9f7-4ddd-8bd2-bac55b04b0c8] Running
	I0904 20:38:23.464856  716742 system_pods.go:61] "kube-controller-manager-addons-057989" [f9bdc1cc-474c-40cf-b9a6-04857fd1dcaf] Running
	I0904 20:38:23.464861  716742 system_pods.go:61] "kube-ingress-dns-minikube" [66349fc9-7ad4-480d-b82b-7fb460b850a2] Running
	I0904 20:38:23.464875  716742 system_pods.go:61] "kube-proxy-nc7jl" [43662cab-76d9-4759-9d5b-6f8c245fa417] Running
	I0904 20:38:23.464880  716742 system_pods.go:61] "kube-scheduler-addons-057989" [66094b5b-4131-480b-aff3-4f9187b9afa4] Running
	I0904 20:38:23.464885  716742 system_pods.go:61] "metrics-server-84c5f94fbc-fq2ps" [42462678-f110-4415-b2f1-367217f8c8a2] Running
	I0904 20:38:23.464903  716742 system_pods.go:61] "nvidia-device-plugin-daemonset-hxn5k" [e2ce6825-b8bf-4d5a-a77f-337ca9cd2e60] Running
	I0904 20:38:23.464907  716742 system_pods.go:61] "registry-6fb4cdfc84-q2v5x" [08b3698e-ab89-4393-846c-c4d5984ebe9e] Running
	I0904 20:38:23.464911  716742 system_pods.go:61] "registry-proxy-xfn95" [19eda952-0370-4c89-ad9f-fa2fcf34e855] Running
	I0904 20:38:23.464915  716742 system_pods.go:61] "snapshot-controller-56fcc65765-2nr7v" [e1ed8e39-dd7b-4cfb-bf3e-3ba5331286b1] Running
	I0904 20:38:23.464922  716742 system_pods.go:61] "snapshot-controller-56fcc65765-tcz8s" [16aa5513-c8b9-4e3b-9c63-2b9d9c64ef30] Running
	I0904 20:38:23.464927  716742 system_pods.go:61] "storage-provisioner" [12d1bdba-0302-4966-8175-e7542a9ae817] Running
	I0904 20:38:23.464937  716742 system_pods.go:74] duration metric: took 11.166175842s to wait for pod list to return data ...
	I0904 20:38:23.464949  716742 default_sa.go:34] waiting for default service account to be created ...
	I0904 20:38:23.467768  716742 default_sa.go:45] found service account: "default"
	I0904 20:38:23.467802  716742 default_sa.go:55] duration metric: took 2.843632ms for default service account to be created ...
	I0904 20:38:23.467813  716742 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 20:38:23.479242  716742 system_pods.go:86] 18 kube-system pods found
	I0904 20:38:23.479286  716742 system_pods.go:89] "coredns-6f6b679f8f-k9k5f" [275ab65d-8cdd-4e33-9a30-8e2dea82c08e] Running
	I0904 20:38:23.479295  716742 system_pods.go:89] "csi-hostpath-attacher-0" [415f2771-f4e0-4711-90b4-bbb3cd155351] Running
	I0904 20:38:23.479301  716742 system_pods.go:89] "csi-hostpath-resizer-0" [fcf10418-cc7b-4979-851d-4f6623df5536] Running
	I0904 20:38:23.479306  716742 system_pods.go:89] "csi-hostpathplugin-mn9qp" [0f3278f5-fc14-4f5d-a426-c25a64816e1c] Running
	I0904 20:38:23.479311  716742 system_pods.go:89] "etcd-addons-057989" [e11680b5-b6b4-44d1-bd13-f62d154e2a01] Running
	I0904 20:38:23.479317  716742 system_pods.go:89] "kindnet-xh95z" [0ad1e90a-ac7c-4bde-a26d-ff3f11c0f743] Running
	I0904 20:38:23.479321  716742 system_pods.go:89] "kube-apiserver-addons-057989" [5aea7959-e9f7-4ddd-8bd2-bac55b04b0c8] Running
	I0904 20:38:23.479332  716742 system_pods.go:89] "kube-controller-manager-addons-057989" [f9bdc1cc-474c-40cf-b9a6-04857fd1dcaf] Running
	I0904 20:38:23.479337  716742 system_pods.go:89] "kube-ingress-dns-minikube" [66349fc9-7ad4-480d-b82b-7fb460b850a2] Running
	I0904 20:38:23.479348  716742 system_pods.go:89] "kube-proxy-nc7jl" [43662cab-76d9-4759-9d5b-6f8c245fa417] Running
	I0904 20:38:23.479353  716742 system_pods.go:89] "kube-scheduler-addons-057989" [66094b5b-4131-480b-aff3-4f9187b9afa4] Running
	I0904 20:38:23.479359  716742 system_pods.go:89] "metrics-server-84c5f94fbc-fq2ps" [42462678-f110-4415-b2f1-367217f8c8a2] Running
	I0904 20:38:23.479367  716742 system_pods.go:89] "nvidia-device-plugin-daemonset-hxn5k" [e2ce6825-b8bf-4d5a-a77f-337ca9cd2e60] Running
	I0904 20:38:23.479371  716742 system_pods.go:89] "registry-6fb4cdfc84-q2v5x" [08b3698e-ab89-4393-846c-c4d5984ebe9e] Running
	I0904 20:38:23.479375  716742 system_pods.go:89] "registry-proxy-xfn95" [19eda952-0370-4c89-ad9f-fa2fcf34e855] Running
	I0904 20:38:23.479384  716742 system_pods.go:89] "snapshot-controller-56fcc65765-2nr7v" [e1ed8e39-dd7b-4cfb-bf3e-3ba5331286b1] Running
	I0904 20:38:23.479388  716742 system_pods.go:89] "snapshot-controller-56fcc65765-tcz8s" [16aa5513-c8b9-4e3b-9c63-2b9d9c64ef30] Running
	I0904 20:38:23.479392  716742 system_pods.go:89] "storage-provisioner" [12d1bdba-0302-4966-8175-e7542a9ae817] Running
	I0904 20:38:23.479403  716742 system_pods.go:126] duration metric: took 11.582438ms to wait for k8s-apps to be running ...
	I0904 20:38:23.479411  716742 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 20:38:23.479471  716742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 20:38:23.492157  716742 system_svc.go:56] duration metric: took 12.73694ms WaitForService to wait for kubelet
	I0904 20:38:23.492198  716742 kubeadm.go:582] duration metric: took 3m1.585223376s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 20:38:23.492221  716742 node_conditions.go:102] verifying NodePressure condition ...
	I0904 20:38:23.495727  716742 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 20:38:23.495758  716742 node_conditions.go:123] node cpu capacity is 2
	I0904 20:38:23.495769  716742 node_conditions.go:105] duration metric: took 3.542898ms to run NodePressure ...
	I0904 20:38:23.495782  716742 start.go:241] waiting for startup goroutines ...
	I0904 20:38:23.495790  716742 start.go:246] waiting for cluster config update ...
	I0904 20:38:23.495806  716742 start.go:255] writing updated cluster config ...
	I0904 20:38:23.496108  716742 ssh_runner.go:195] Run: rm -f paused
	I0904 20:38:23.838873  716742 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0904 20:38:23.842549  716742 out.go:177] * Done! kubectl is now configured to use "addons-057989" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.177751737Z" level=info msg="Stopped pod sandbox (already stopped): 0dd5920b12b1ab606587ef9b85f633b2240ab2f053c7b36cdb531a97cb6d01bc" id=3ee14e00-89cf-4b77-a725-c79188a8cfca name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.178117545Z" level=info msg="Removing pod sandbox: 0dd5920b12b1ab606587ef9b85f633b2240ab2f053c7b36cdb531a97cb6d01bc" id=5df34d09-898f-41cc-8a60-505cd69cafdf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.190680599Z" level=info msg="Removed pod sandbox: 0dd5920b12b1ab606587ef9b85f633b2240ab2f053c7b36cdb531a97cb6d01bc" id=5df34d09-898f-41cc-8a60-505cd69cafdf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.191269901Z" level=info msg="Stopping pod sandbox: cd0634cc263fe139c1b3628f821af88ad31a70c9ad35dff36fa5282004561619" id=1db14780-43eb-4e6a-923f-9922ac0abfb6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.191311352Z" level=info msg="Stopped pod sandbox (already stopped): cd0634cc263fe139c1b3628f821af88ad31a70c9ad35dff36fa5282004561619" id=1db14780-43eb-4e6a-923f-9922ac0abfb6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.191686423Z" level=info msg="Removing pod sandbox: cd0634cc263fe139c1b3628f821af88ad31a70c9ad35dff36fa5282004561619" id=5c6d0a67-0e08-445a-ba3a-223d10069b13 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.200803355Z" level=info msg="Removed pod sandbox: cd0634cc263fe139c1b3628f821af88ad31a70c9ad35dff36fa5282004561619" id=5c6d0a67-0e08-445a-ba3a-223d10069b13 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.201295560Z" level=info msg="Stopping pod sandbox: fbaf7f46fd8ef5e5005fdb5847839683085ca941234832489b5e6a909236e3a8" id=073d1860-2cc3-4a8c-a775-a1a363bdb270 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.201330308Z" level=info msg="Stopped pod sandbox (already stopped): fbaf7f46fd8ef5e5005fdb5847839683085ca941234832489b5e6a909236e3a8" id=073d1860-2cc3-4a8c-a775-a1a363bdb270 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.201768877Z" level=info msg="Removing pod sandbox: fbaf7f46fd8ef5e5005fdb5847839683085ca941234832489b5e6a909236e3a8" id=70fa8f78-565e-4459-8a18-6fe3ce5da0eb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:52:17 addons-057989 crio[964]: time="2024-09-04 20:52:17.216234543Z" level=info msg="Removed pod sandbox: fbaf7f46fd8ef5e5005fdb5847839683085ca941234832489b5e6a909236e3a8" id=70fa8f78-565e-4459-8a18-6fe3ce5da0eb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 20:52:28 addons-057989 crio[964]: time="2024-09-04 20:52:28.676603417Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7f70088e-6b27-4337-9089-ae8e98acf65b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:52:28 addons-057989 crio[964]: time="2024-09-04 20:52:28.676845864Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7f70088e-6b27-4337-9089-ae8e98acf65b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:52:43 addons-057989 crio[964]: time="2024-09-04 20:52:43.676261964Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=92f354a4-f758-4839-895f-8350b904e589 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:52:43 addons-057989 crio[964]: time="2024-09-04 20:52:43.676488067Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=92f354a4-f758-4839-895f-8350b904e589 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:52:58 addons-057989 crio[964]: time="2024-09-04 20:52:58.677045193Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4489dd61-be4b-4e79-9e9d-26a4b1f7ec2b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:52:58 addons-057989 crio[964]: time="2024-09-04 20:52:58.677272838Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4489dd61-be4b-4e79-9e9d-26a4b1f7ec2b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:53:12 addons-057989 crio[964]: time="2024-09-04 20:53:12.676640086Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d5256dda-7357-4925-a540-ca350b2ea925 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:53:12 addons-057989 crio[964]: time="2024-09-04 20:53:12.676876765Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d5256dda-7357-4925-a540-ca350b2ea925 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 20:53:21 addons-057989 crio[964]: time="2024-09-04 20:53:21.542923130Z" level=info msg="Stopping container: b6829e8e31f0784e13196eb81e22ae599f86e6333aba841625acef91f3e4e494 (timeout: 30s)" id=3ca0f17b-8d31-4536-8da9-753838f645e8 name=/runtime.v1.RuntimeService/StopContainer
	Sep 04 20:53:22 addons-057989 crio[964]: time="2024-09-04 20:53:22.738440733Z" level=info msg="Stopped container b6829e8e31f0784e13196eb81e22ae599f86e6333aba841625acef91f3e4e494: kube-system/metrics-server-84c5f94fbc-fq2ps/metrics-server" id=3ca0f17b-8d31-4536-8da9-753838f645e8 name=/runtime.v1.RuntimeService/StopContainer
	Sep 04 20:53:22 addons-057989 crio[964]: time="2024-09-04 20:53:22.739495598Z" level=info msg="Stopping pod sandbox: 0eacdbbb6c58733c81fe22e1daa400651cee4368409daed89b352a81ffc66333" id=1eddb6eb-3e0f-402d-bd72-81cc7c7f6927 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 04 20:53:22 addons-057989 crio[964]: time="2024-09-04 20:53:22.739713488Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-fq2ps Namespace:kube-system ID:0eacdbbb6c58733c81fe22e1daa400651cee4368409daed89b352a81ffc66333 UID:42462678-f110-4415-b2f1-367217f8c8a2 NetNS:/var/run/netns/0e416178-0828-457a-a07b-47feaea0c3d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 04 20:53:22 addons-057989 crio[964]: time="2024-09-04 20:53:22.739849099Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-fq2ps from CNI network \"kindnet\" (type=ptp)"
	Sep 04 20:53:22 addons-057989 crio[964]: time="2024-09-04 20:53:22.783467160Z" level=info msg="Stopped pod sandbox: 0eacdbbb6c58733c81fe22e1daa400651cee4368409daed89b352a81ffc66333" id=1eddb6eb-3e0f-402d-bd72-81cc7c7f6927 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7279f29d273f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   ba969b211e5a6       hello-world-app-55bf9c44b4-pdmkb
	6082ed4240ccb       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                         5 minutes ago       Running             nginx                     0                   00d8643d081a3       nginx
	17ccab4a15b48       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            16 minutes ago      Running             gcp-auth                  0                   c1f5e9bf12177       gcp-auth-89d5ffd79-cxk4z
	b6829e8e31f07       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   16 minutes ago      Exited              metrics-server            0                   0eacdbbb6c587       metrics-server-84c5f94fbc-fq2ps
	1020fa8b2d129       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        17 minutes ago      Running             storage-provisioner       0                   13409a452e461       storage-provisioner
	2da0c2547a33e       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        17 minutes ago      Running             coredns                   0                   bb705bdc5c322       coredns-6f6b679f8f-k9k5f
	508bb2db26ab2       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                      17 minutes ago      Running             kindnet-cni               0                   7d8b3855e8eb9       kindnet-xh95z
	13931a0aa1133       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                        17 minutes ago      Running             kube-proxy                0                   f81a1c946ebc8       kube-proxy-nc7jl
	8926a3a460f5f       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                        18 minutes ago      Running             kube-apiserver            0                   092c186577491       kube-apiserver-addons-057989
	4b86be5e13ac3       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        18 minutes ago      Running             etcd                      0                   ee74187eabd31       etcd-addons-057989
	d659a50021dfa       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                        18 minutes ago      Running             kube-scheduler            0                   5bafee131ac20       kube-scheduler-addons-057989
	7276ded69a4bd       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                        18 minutes ago      Running             kube-controller-manager   0                   fa8a52afc7812       kube-controller-manager-addons-057989
	
	
	==> coredns [2da0c2547a33e0c2c6c4c4b539dd8a5498f9931c72ac318e05d62c3b256e442b] <==
	[INFO] 10.244.0.4:43230 - 36266 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080696s
	[INFO] 10.244.0.4:54625 - 20507 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002721352s
	[INFO] 10.244.0.4:54625 - 17636 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002899991s
	[INFO] 10.244.0.4:47752 - 7177 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069036s
	[INFO] 10.244.0.4:47752 - 24629 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101462s
	[INFO] 10.244.0.4:45701 - 37302 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101938s
	[INFO] 10.244.0.4:45701 - 44725 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000049492s
	[INFO] 10.244.0.4:41451 - 17255 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054743s
	[INFO] 10.244.0.4:41451 - 52577 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004708s
	[INFO] 10.244.0.4:58781 - 44362 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045061s
	[INFO] 10.244.0.4:58781 - 5196 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000714s
	[INFO] 10.244.0.4:33457 - 46149 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001594296s
	[INFO] 10.244.0.4:33457 - 859 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001866158s
	[INFO] 10.244.0.4:53802 - 33736 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000056253s
	[INFO] 10.244.0.4:53802 - 30774 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077364s
	[INFO] 10.244.0.20:37769 - 193 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000226473s
	[INFO] 10.244.0.20:34803 - 4301 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000135128s
	[INFO] 10.244.0.20:58659 - 7520 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152554s
	[INFO] 10.244.0.20:36650 - 49243 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000080572s
	[INFO] 10.244.0.20:38727 - 6956 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000194687s
	[INFO] 10.244.0.20:48234 - 885 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104121s
	[INFO] 10.244.0.20:43383 - 57780 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002078142s
	[INFO] 10.244.0.20:59508 - 59382 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002994471s
	[INFO] 10.244.0.20:37033 - 24816 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002191148s
	[INFO] 10.244.0.20:33558 - 37651 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0021591s
	
	
	==> describe nodes <==
	Name:               addons-057989
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-057989
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=addons-057989
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_04T20_35_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-057989
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 20:35:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-057989
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 20:53:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Sep 2024 20:51:56 +0000   Wed, 04 Sep 2024 20:35:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Sep 2024 20:51:56 +0000   Wed, 04 Sep 2024 20:35:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Sep 2024 20:51:56 +0000   Wed, 04 Sep 2024 20:35:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Sep 2024 20:51:56 +0000   Wed, 04 Sep 2024 20:36:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-057989
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 21d608e1e5814ff9b34c3cb1cfdf5bda
	  System UUID:                19e6588e-4dc5-4438-9acf-c7fa25e5848f
	  Boot ID:                    02fc5889-82d8-42f6-b649-9c13bcf74bdb
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-pdmkb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  gcp-auth                    gcp-auth-89d5ffd79-cxk4z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-6f6b679f8f-k9k5f                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     18m
	  kube-system                 etcd-addons-057989                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-xh95z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-addons-057989             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-057989    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-nc7jl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-057989             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node addons-057989 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node addons-057989 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node addons-057989 status is now: NodeHasSufficientPID
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node addons-057989 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node addons-057989 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m                kubelet          Node addons-057989 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node addons-057989 event: Registered Node addons-057989 in Controller
	  Normal   NodeReady                17m                kubelet          Node addons-057989 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 4 20:07] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Sep 4 20:31] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [4b86be5e13ac3c9e21754929882d787bcd75e6f7be5a7e634a621f8d45ef7971] <==
	{"level":"warn","ts":"2024-09-04T20:35:25.581489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.959743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.687455Z","caller":"traceutil/trace.go:171","msg":"trace[1976798533] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:388; }","duration":"253.908912ms","start":"2024-09-04T20:35:25.433511Z","end":"2024-09-04T20:35:25.687420Z","steps":["trace[1976798533] 'range keys from in-memory index tree'  (duration: 147.878228ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T20:35:25.691719Z","caller":"traceutil/trace.go:171","msg":"trace[369956328] linearizableReadLoop","detail":"{readStateIndex:398; appliedIndex:398; }","duration":"110.347393ms","start":"2024-09-04T20:35:25.581351Z","end":"2024-09-04T20:35:25.691698Z","steps":["trace[369956328] 'read index received'  (duration: 110.342412ms)","trace[369956328] 'applied index is now lower than readState.Index'  (duration: 3.881µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.694290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.921597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.707065Z","caller":"traceutil/trace.go:171","msg":"trace[1332958654] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:388; }","duration":"125.695984ms","start":"2024-09-04T20:35:25.581345Z","end":"2024-09-04T20:35:25.707041Z","steps":["trace[1332958654] 'agreement among raft nodes before linearized reading'  (duration: 112.43803ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T20:35:25.884853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.044659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.885401Z","caller":"traceutil/trace.go:171","msg":"trace[1259821602] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:397; }","duration":"130.606ms","start":"2024-09-04T20:35:25.754776Z","end":"2024-09-04T20:35:25.885382Z","steps":["trace[1259821602] 'agreement among raft nodes before linearized reading'  (duration: 53.794706ms)","trace[1259821602] 'range keys from in-memory index tree'  (duration: 76.238145ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.885753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.982311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.888311Z","caller":"traceutil/trace.go:171","msg":"trace[778996107] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:0; response_revision:397; }","duration":"133.536021ms","start":"2024-09-04T20:35:25.754756Z","end":"2024-09-04T20:35:25.888292Z","steps":["trace[778996107] 'agreement among raft nodes before linearized reading'  (duration: 53.824646ms)","trace[778996107] 'range keys from in-memory index tree'  (duration: 77.148805ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:25.885790Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.04526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:25.895290Z","caller":"traceutil/trace.go:171","msg":"trace[850430809] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:397; }","duration":"140.533663ms","start":"2024-09-04T20:35:25.754733Z","end":"2024-09-04T20:35:25.895267Z","steps":["trace[850430809] 'agreement among raft nodes before linearized reading'  (duration: 53.853651ms)","trace[850430809] 'range keys from in-memory index tree'  (duration: 77.186547ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.954061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.583009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3455"}
	{"level":"info","ts":"2024-09-04T20:35:26.954130Z","caller":"traceutil/trace.go:171","msg":"trace[1772008450] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:444; }","duration":"167.663204ms","start":"2024-09-04T20:35:26.786452Z","end":"2024-09-04T20:35:26.954115Z","steps":["trace[1772008450] 'agreement among raft nodes before linearized reading'  (duration: 111.077805ms)","trace[1772008450] 'range keys from in-memory index tree'  (duration: 56.418272ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.961722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.697616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:26.961805Z","caller":"traceutil/trace.go:171","msg":"trace[2102871173] range","detail":"{range_begin:/registry/serviceaccounts/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:444; }","duration":"175.792769ms","start":"2024-09-04T20:35:26.785998Z","end":"2024-09-04T20:35:26.961790Z","steps":["trace[2102871173] 'agreement among raft nodes before linearized reading'  (duration: 111.129365ms)","trace[2102871173] 'range keys from in-memory index tree'  (duration: 64.517372ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.962247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.843828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-09-04T20:35:26.962291Z","caller":"traceutil/trace.go:171","msg":"trace[2095094196] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:444; }","duration":"175.943485ms","start":"2024-09-04T20:35:26.786339Z","end":"2024-09-04T20:35:26.962282Z","steps":["trace[2095094196] 'agreement among raft nodes before linearized reading'  (duration: 111.201643ms)","trace[2095094196] 'range keys from in-memory index tree'  (duration: 64.583528ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T20:35:26.962505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.470957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gadget/gadget\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T20:35:26.962550Z","caller":"traceutil/trace.go:171","msg":"trace[353857431] range","detail":"{range_begin:/registry/serviceaccounts/gadget/gadget; range_end:; response_count:0; response_revision:444; }","duration":"176.513795ms","start":"2024-09-04T20:35:26.786026Z","end":"2024-09-04T20:35:26.962539Z","steps":["trace[353857431] 'agreement among raft nodes before linearized reading'  (duration: 111.527379ms)","trace[353857431] 'range keys from in-memory index tree'  (duration: 64.934536ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-04T20:45:12.338154Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1560}
	{"level":"info","ts":"2024-09-04T20:45:12.368220Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1560,"took":"29.612622ms","hash":3263472688,"current-db-size-bytes":6590464,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3416064,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-04T20:45:12.368273Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3263472688,"revision":1560,"compact-revision":-1}
	{"level":"info","ts":"2024-09-04T20:50:12.347485Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1979}
	{"level":"info","ts":"2024-09-04T20:50:12.372012Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1979,"took":"23.716274ms","hash":3109541245,"current-db-size-bytes":6590464,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3985408,"current-db-size-in-use":"4.0 MB"}
	{"level":"info","ts":"2024-09-04T20:50:12.372473Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3109541245,"revision":1979,"compact-revision":1560}
	
	
	==> gcp-auth [17ccab4a15b4833c23b0926aecd04a59266538dd48181e9afa4051fa2ef4c952] <==
	2024/09/04 20:38:24 Ready to write response ...
	2024/09/04 20:38:24 Ready to marshal response ...
	2024/09/04 20:38:24 Ready to write response ...
	2024/09/04 20:46:32 Ready to marshal response ...
	2024/09/04 20:46:32 Ready to write response ...
	2024/09/04 20:46:37 Ready to marshal response ...
	2024/09/04 20:46:37 Ready to write response ...
	2024/09/04 20:47:04 Ready to marshal response ...
	2024/09/04 20:47:04 Ready to write response ...
	2024/09/04 20:47:52 Ready to marshal response ...
	2024/09/04 20:47:52 Ready to write response ...
	2024/09/04 20:50:10 Ready to marshal response ...
	2024/09/04 20:50:10 Ready to write response ...
	2024/09/04 20:50:23 Ready to marshal response ...
	2024/09/04 20:50:23 Ready to write response ...
	2024/09/04 20:50:23 Ready to marshal response ...
	2024/09/04 20:50:23 Ready to write response ...
	2024/09/04 20:50:30 Ready to marshal response ...
	2024/09/04 20:50:30 Ready to write response ...
	2024/09/04 20:51:39 Ready to marshal response ...
	2024/09/04 20:51:39 Ready to write response ...
	2024/09/04 20:51:39 Ready to marshal response ...
	2024/09/04 20:51:39 Ready to write response ...
	2024/09/04 20:51:39 Ready to marshal response ...
	2024/09/04 20:51:39 Ready to write response ...
	
	
	==> kernel <==
	 20:53:23 up  4:35,  0 users,  load average: 0.30, 0.47, 1.07
	Linux addons-057989 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [508bb2db26ab22cd4c3686e88c9758199071fd368766566a33011337ab014706] <==
	I0904 20:51:18.289491       1 main.go:299] handling current node
	I0904 20:51:28.287452       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:51:28.287490       1 main.go:299] handling current node
	I0904 20:51:38.287864       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:51:38.287956       1 main.go:299] handling current node
	I0904 20:51:48.286790       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:51:48.286826       1 main.go:299] handling current node
	I0904 20:51:58.287534       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:51:58.287571       1 main.go:299] handling current node
	I0904 20:52:08.288688       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:52:08.288732       1 main.go:299] handling current node
	I0904 20:52:18.290804       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:52:18.290839       1 main.go:299] handling current node
	I0904 20:52:28.286984       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:52:28.287131       1 main.go:299] handling current node
	I0904 20:52:38.290407       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:52:38.290443       1 main.go:299] handling current node
	I0904 20:52:48.286506       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:52:48.286544       1 main.go:299] handling current node
	I0904 20:52:58.287221       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:52:58.287254       1 main.go:299] handling current node
	I0904 20:53:08.286465       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:53:08.286594       1 main.go:299] handling current node
	I0904 20:53:18.295974       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 20:53:18.296009       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8926a3a460f5f9f25d956008667a84daecb2b19ef2f81d569cea19b029936c59] <==
	E0904 20:37:49.387183       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.223.191:443: connect: connection refused" logger="UnhandledError"
	E0904 20:37:49.390494       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.223.191:443: connect: connection refused" logger="UnhandledError"
	E0904 20:37:49.395595       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.223.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.223.191:443: connect: connection refused" logger="UnhandledError"
	I0904 20:37:49.490854       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0904 20:46:44.207537       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0904 20:47:20.716830       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.716996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.747162       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.747299       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.757218       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.757301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.799045       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.799887       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 20:47:20.853654       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 20:47:20.853691       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0904 20:47:21.799451       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0904 20:47:21.854331       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0904 20:47:21.948281       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0904 20:47:47.023165       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0904 20:47:48.067539       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0904 20:47:52.642386       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0904 20:47:52.941344       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.35.92"}
	I0904 20:50:10.969869       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.199.63"}
	E0904 20:50:46.806921       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0904 20:51:39.919308       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.175.108"}
	
	
	==> kube-controller-manager [7276ded69a4bd924b2f7b5e8206559df2b40ffe05e27daa191043ca4589e5743] <==
	I0904 20:51:44.534668       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="42.23µs"
	I0904 20:51:50.590285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="4.299µs"
	W0904 20:51:55.254295       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:51:55.254342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 20:51:56.076613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-057989"
	I0904 20:52:00.696128       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0904 20:52:08.897587       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:52:08.897630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:52:11.633346       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:52:11.633389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:52:13.889546       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:52:13.889590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:52:34.581525       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:52:34.581570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:52:41.727990       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:52:41.728036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:53:01.327482       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:53:01.327525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:53:06.895851       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:53:06.895895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:53:14.183425       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:53:14.183468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 20:53:21.216422       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 20:53:21.216465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 20:53:21.515537       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="4.169µs"
	
	
	==> kube-proxy [13931a0aa1133b783bab0254a55ee0610a97a03fc3e11121d3c36fb2fdd0d4d5] <==
	I0904 20:35:27.820676       1 server_linux.go:66] "Using iptables proxy"
	I0904 20:35:28.601961       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0904 20:35:28.602048       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 20:35:28.838061       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 20:35:28.838226       1 server_linux.go:169] "Using iptables Proxier"
	I0904 20:35:28.840297       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 20:35:28.840905       1 server.go:483] "Version info" version="v1.31.0"
	I0904 20:35:28.840973       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 20:35:28.843573       1 config.go:197] "Starting service config controller"
	I0904 20:35:28.843687       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 20:35:28.881197       1 config.go:104] "Starting endpoint slice config controller"
	I0904 20:35:28.881320       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 20:35:28.883028       1 config.go:326] "Starting node config controller"
	I0904 20:35:28.883115       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 20:35:28.997291       1 shared_informer.go:320] Caches are synced for node config
	I0904 20:35:29.021198       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0904 20:35:29.044954       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d659a50021dfa9b786dbb59e2bfb694fff22198101377fd9338c8cd2fe8ae608] <==
	W0904 20:35:14.275096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0904 20:35:14.275232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:14.275216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0904 20:35:14.275327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.102647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0904 20:35:15.102790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.146211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0904 20:35:15.146262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.151331       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0904 20:35:15.151486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.194181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0904 20:35:15.194327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.218850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0904 20:35:15.218973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.252691       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0904 20:35:15.252825       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0904 20:35:15.348686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0904 20:35:15.348826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.376639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0904 20:35:15.376765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.392542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0904 20:35:15.392666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 20:35:15.425197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0904 20:35:15.425318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0904 20:35:17.567935       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 20:52:14 addons-057989 kubelet[1510]: E0904 20:52:14.677228    1510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c44deaee-b000-42d4-a04d-514ae8c98a8a"
	Sep 04 20:52:17 addons-057989 kubelet[1510]: E0904 20:52:17.127325    1510 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483137126963537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:17 addons-057989 kubelet[1510]: E0904 20:52:17.127365    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483137126963537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:27 addons-057989 kubelet[1510]: E0904 20:52:27.130068    1510 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483147129761891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:27 addons-057989 kubelet[1510]: E0904 20:52:27.130106    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483147129761891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:28 addons-057989 kubelet[1510]: E0904 20:52:28.677516    1510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c44deaee-b000-42d4-a04d-514ae8c98a8a"
	Sep 04 20:52:37 addons-057989 kubelet[1510]: E0904 20:52:37.132672    1510 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483157132415320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:37 addons-057989 kubelet[1510]: E0904 20:52:37.132711    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483157132415320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:43 addons-057989 kubelet[1510]: E0904 20:52:43.676785    1510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c44deaee-b000-42d4-a04d-514ae8c98a8a"
	Sep 04 20:52:47 addons-057989 kubelet[1510]: E0904 20:52:47.135477    1510 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483167135224955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:47 addons-057989 kubelet[1510]: E0904 20:52:47.135540    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483167135224955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:57 addons-057989 kubelet[1510]: E0904 20:52:57.138154    1510 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483177137891655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:57 addons-057989 kubelet[1510]: E0904 20:52:57.138193    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483177137891655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:52:58 addons-057989 kubelet[1510]: E0904 20:52:58.677793    1510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c44deaee-b000-42d4-a04d-514ae8c98a8a"
	Sep 04 20:53:07 addons-057989 kubelet[1510]: E0904 20:53:07.141351    1510 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483187141091639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:53:07 addons-057989 kubelet[1510]: E0904 20:53:07.141391    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483187141091639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:53:12 addons-057989 kubelet[1510]: E0904 20:53:12.677241    1510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c44deaee-b000-42d4-a04d-514ae8c98a8a"
	Sep 04 20:53:17 addons-057989 kubelet[1510]: E0904 20:53:17.143973    1510 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483197143728823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:53:17 addons-057989 kubelet[1510]: E0904 20:53:17.144015    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725483197143728823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582448,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 20:53:22 addons-057989 kubelet[1510]: I0904 20:53:22.915341    1510 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4qwh\" (UniqueName: \"kubernetes.io/projected/42462678-f110-4415-b2f1-367217f8c8a2-kube-api-access-t4qwh\") pod \"42462678-f110-4415-b2f1-367217f8c8a2\" (UID: \"42462678-f110-4415-b2f1-367217f8c8a2\") "
	Sep 04 20:53:22 addons-057989 kubelet[1510]: I0904 20:53:22.915402    1510 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42462678-f110-4415-b2f1-367217f8c8a2-tmp-dir\") pod \"42462678-f110-4415-b2f1-367217f8c8a2\" (UID: \"42462678-f110-4415-b2f1-367217f8c8a2\") "
	Sep 04 20:53:22 addons-057989 kubelet[1510]: I0904 20:53:22.918358    1510 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42462678-f110-4415-b2f1-367217f8c8a2-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "42462678-f110-4415-b2f1-367217f8c8a2" (UID: "42462678-f110-4415-b2f1-367217f8c8a2"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 04 20:53:22 addons-057989 kubelet[1510]: I0904 20:53:22.924792    1510 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42462678-f110-4415-b2f1-367217f8c8a2-kube-api-access-t4qwh" (OuterVolumeSpecName: "kube-api-access-t4qwh") pod "42462678-f110-4415-b2f1-367217f8c8a2" (UID: "42462678-f110-4415-b2f1-367217f8c8a2"). InnerVolumeSpecName "kube-api-access-t4qwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 20:53:23 addons-057989 kubelet[1510]: I0904 20:53:23.016376    1510 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t4qwh\" (UniqueName: \"kubernetes.io/projected/42462678-f110-4415-b2f1-367217f8c8a2-kube-api-access-t4qwh\") on node \"addons-057989\" DevicePath \"\""
	Sep 04 20:53:23 addons-057989 kubelet[1510]: I0904 20:53:23.016449    1510 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42462678-f110-4415-b2f1-367217f8c8a2-tmp-dir\") on node \"addons-057989\" DevicePath \"\""
	
	
	==> storage-provisioner [1020fa8b2d129b2c1528e8263e44e0614430ad1edde0adfc959a0b0cead5e677] <==
	I0904 20:36:09.670277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 20:36:09.684669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 20:36:09.684712       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 20:36:09.692410       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 20:36:09.692859       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-057989_59879aa7-61cd-4c49-a7f4-85b770d0ea1d!
	I0904 20:36:09.694887       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"311a2453-e39f-4619-9aa1-2dcff1946c80", APIVersion:"v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-057989_59879aa7-61cd-4c49-a7f4-85b770d0ea1d became leader
	I0904 20:36:09.793462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-057989_59879aa7-61cd-4c49-a7f4-85b770d0ea1d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-057989 -n addons-057989
helpers_test.go:261: (dbg) Run:  kubectl --context addons-057989 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-057989 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-057989 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-057989/192.168.49.2
	Start Time:       Wed, 04 Sep 2024 20:38:24 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k4dt6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k4dt6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/busybox to addons-057989
	  Normal   Pulling    13m (x4 over 15m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 15m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 15m)     kubelet            Error: ErrImagePull
	  Warning  Failed     13m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m50s (x43 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (363.14s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-linux-arm64 license: exit status 40 (240.879229ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (129.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-067477 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0904 21:06:41.019210  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:07:08.724234  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-067477 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m5.238149055s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr
E0904 21:08:24.397592  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-067477       NotReady   control-plane   10m     v1.31.0
	ha-067477-m02   Ready      control-plane   9m54s   v1.31.0
	ha-067477-m04   Ready      <none>          7m22s   v1.31.0

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-067477
helpers_test.go:235: (dbg) docker inspect ha-067477:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0afbbbd41dcc0411cc9046cbf16dbbc1d819f583ce6b77f2cade079d3bc44056",
	        "Created": "2024-09-04T20:57:40.511422188Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 777080,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-04T21:06:19.662215345Z",
	            "FinishedAt": "2024-09-04T21:06:18.701215304Z"
	        },
	        "Image": "sha256:8411aacd61cb8f2a7ae48c92e2c9e76ad4dd701b3dba8b30393c5cc31fbd2b15",
	        "ResolvConfPath": "/var/lib/docker/containers/0afbbbd41dcc0411cc9046cbf16dbbc1d819f583ce6b77f2cade079d3bc44056/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0afbbbd41dcc0411cc9046cbf16dbbc1d819f583ce6b77f2cade079d3bc44056/hostname",
	        "HostsPath": "/var/lib/docker/containers/0afbbbd41dcc0411cc9046cbf16dbbc1d819f583ce6b77f2cade079d3bc44056/hosts",
	        "LogPath": "/var/lib/docker/containers/0afbbbd41dcc0411cc9046cbf16dbbc1d819f583ce6b77f2cade079d3bc44056/0afbbbd41dcc0411cc9046cbf16dbbc1d819f583ce6b77f2cade079d3bc44056-json.log",
	        "Name": "/ha-067477",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-067477:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-067477",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/50bd32ec1d225a70cdfe4b70e4b4ae9e3003808ade9651751977dc21ad6770ff-init/diff:/var/lib/docker/overlay2/e164f50a1bfe4541271ed61a6ed23c33b9aae141da805b23620713759476fde0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50bd32ec1d225a70cdfe4b70e4b4ae9e3003808ade9651751977dc21ad6770ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50bd32ec1d225a70cdfe4b70e4b4ae9e3003808ade9651751977dc21ad6770ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50bd32ec1d225a70cdfe4b70e4b4ae9e3003808ade9651751977dc21ad6770ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-067477",
	                "Source": "/var/lib/docker/volumes/ha-067477/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-067477",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-067477",
	                "name.minikube.sigs.k8s.io": "ha-067477",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9b7b3418d1a9ece7f66c499b71fd67ace263ff30817836f66cddfe755d177e8",
	            "SandboxKey": "/var/run/docker/netns/f9b7b3418d1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33589"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33590"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33593"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33591"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33592"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-067477": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d8fe10e7a297138f883b54ec845ceea603995fee73deffac85387f886d0d68ed",
	                    "EndpointID": "372715cbfc90111e99d13bf6f79e44c2a5e72a7731a044d3a61bac48fc5d7ec4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-067477",
	                        "0afbbbd41dcc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-067477 -n ha-067477
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-067477 logs -n 25: (2.00051968s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-067477 cp ha-067477-m03:/home/docker/cp-test.txt                              | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m04:/home/docker/cp-test_ha-067477-m03_ha-067477-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n                                                                 | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n ha-067477-m04 sudo cat                                          | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | /home/docker/cp-test_ha-067477-m03_ha-067477-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-067477 cp testdata/cp-test.txt                                                | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n                                                                 | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-067477 cp ha-067477-m04:/home/docker/cp-test.txt                              | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1674057591/001/cp-test_ha-067477-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n                                                                 | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-067477 cp ha-067477-m04:/home/docker/cp-test.txt                              | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477:/home/docker/cp-test_ha-067477-m04_ha-067477.txt                       |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n                                                                 | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n ha-067477 sudo cat                                              | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | /home/docker/cp-test_ha-067477-m04_ha-067477.txt                                 |           |         |         |                     |                     |
	| cp      | ha-067477 cp ha-067477-m04:/home/docker/cp-test.txt                              | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m02:/home/docker/cp-test_ha-067477-m04_ha-067477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n                                                                 | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n ha-067477-m02 sudo cat                                          | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | /home/docker/cp-test_ha-067477-m04_ha-067477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-067477 cp ha-067477-m04:/home/docker/cp-test.txt                              | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m03:/home/docker/cp-test_ha-067477-m04_ha-067477-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n                                                                 | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | ha-067477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-067477 ssh -n ha-067477-m03 sudo cat                                          | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | /home/docker/cp-test_ha-067477-m04_ha-067477-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-067477 node stop m02 -v=7                                                     | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-067477 node start m02 -v=7                                                    | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:01 UTC | 04 Sep 24 21:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-067477 -v=7                                                           | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-067477 -v=7                                                                | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:02 UTC | 04 Sep 24 21:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-067477 --wait=true -v=7                                                    | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:03 UTC | 04 Sep 24 21:05 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-067477                                                                | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:05 UTC |                     |
	| node    | ha-067477 node delete m03 -v=7                                                   | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:05 UTC | 04 Sep 24 21:05 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-067477 stop -v=7                                                              | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:05 UTC | 04 Sep 24 21:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-067477 --wait=true                                                         | ha-067477 | jenkins | v1.34.0 | 04 Sep 24 21:06 UTC | 04 Sep 24 21:08 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 21:06:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 21:06:19.130863  776883 out.go:345] Setting OutFile to fd 1 ...
	I0904 21:06:19.131014  776883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:06:19.131025  776883 out.go:358] Setting ErrFile to fd 2...
	I0904 21:06:19.131030  776883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:06:19.131275  776883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 21:06:19.131668  776883 out.go:352] Setting JSON to false
	I0904 21:06:19.132550  776883 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17330,"bootTime":1725466650,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0904 21:06:19.132651  776883 start.go:139] virtualization:  
	I0904 21:06:19.135236  776883 out.go:177] * [ha-067477] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0904 21:06:19.137687  776883 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 21:06:19.137744  776883 notify.go:220] Checking for updates...
	I0904 21:06:19.142083  776883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:06:19.144178  776883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 21:06:19.146145  776883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	I0904 21:06:19.148140  776883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 21:06:19.150373  776883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:06:19.152982  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:06:19.153606  776883 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 21:06:19.177121  776883 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0904 21:06:19.177225  776883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:06:19.237148  776883 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-04 21:06:19.227327952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 21:06:19.237288  776883 docker.go:307] overlay module found
	I0904 21:06:19.239676  776883 out.go:177] * Using the docker driver based on existing profile
	I0904 21:06:19.242106  776883 start.go:297] selected driver: docker
	I0904 21:06:19.242129  776883 start.go:901] validating driver "docker" against &{Name:ha-067477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-067477 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:06:19.242268  776883 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:06:19.242365  776883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:06:19.300086  776883 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-04 21:06:19.290890832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 21:06:19.300498  776883 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 21:06:19.300534  776883 cni.go:84] Creating CNI manager for ""
	I0904 21:06:19.300547  776883 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0904 21:06:19.300592  776883 start.go:340] cluster config:
	{Name:ha-067477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-067477 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:06:19.304747  776883 out.go:177] * Starting "ha-067477" primary control-plane node in "ha-067477" cluster
	I0904 21:06:19.307041  776883 cache.go:121] Beginning downloading kic base image for docker with crio
	I0904 21:06:19.309516  776883 out.go:177] * Pulling base image v0.0.45 ...
	I0904 21:06:19.311880  776883 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 21:06:19.311940  776883 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0904 21:06:19.311954  776883 cache.go:56] Caching tarball of preloaded images
	I0904 21:06:19.311968  776883 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0904 21:06:19.312032  776883 preload.go:172] Found /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0904 21:06:19.312042  776883 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0904 21:06:19.312191  776883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/config.json ...
	W0904 21:06:19.330845  776883 image.go:95] image gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 is of wrong architecture
	I0904 21:06:19.330866  776883 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0904 21:06:19.330955  776883 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0904 21:06:19.330980  776883 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0904 21:06:19.330986  776883 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0904 21:06:19.331003  776883 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0904 21:06:19.331009  776883 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0904 21:06:19.332273  776883 image.go:273] response: 
	I0904 21:06:19.517218  776883 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0904 21:06:19.517256  776883 cache.go:194] Successfully downloaded all kic artifacts
	I0904 21:06:19.517302  776883 start.go:360] acquireMachinesLock for ha-067477: {Name:mk992ede83234d635ffdfc30d9a7600c386ff095 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 21:06:19.517380  776883 start.go:364] duration metric: took 48.967µs to acquireMachinesLock for "ha-067477"
	I0904 21:06:19.517407  776883 start.go:96] Skipping create...Using existing machine configuration
	I0904 21:06:19.517417  776883 fix.go:54] fixHost starting: 
	I0904 21:06:19.517706  776883 cli_runner.go:164] Run: docker container inspect ha-067477 --format={{.State.Status}}
	I0904 21:06:19.534166  776883 fix.go:112] recreateIfNeeded on ha-067477: state=Stopped err=<nil>
	W0904 21:06:19.534195  776883 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 21:06:19.538716  776883 out.go:177] * Restarting existing docker container for "ha-067477" ...
	I0904 21:06:19.541503  776883 cli_runner.go:164] Run: docker start ha-067477
	I0904 21:06:19.827839  776883 cli_runner.go:164] Run: docker container inspect ha-067477 --format={{.State.Status}}
	I0904 21:06:19.849732  776883 kic.go:430] container "ha-067477" state is running.
	I0904 21:06:19.850178  776883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477
	I0904 21:06:19.874295  776883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/config.json ...
	I0904 21:06:19.876952  776883 machine.go:93] provisionDockerMachine start ...
	I0904 21:06:19.877169  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:19.903174  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:06:19.903597  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33589 <nil> <nil>}
	I0904 21:06:19.903614  776883 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 21:06:19.904918  776883 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0904 21:06:23.025934  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-067477
	
	I0904 21:06:23.025961  776883 ubuntu.go:169] provisioning hostname "ha-067477"
	I0904 21:06:23.026033  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:23.044749  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:06:23.045018  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33589 <nil> <nil>}
	I0904 21:06:23.045035  776883 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-067477 && echo "ha-067477" | sudo tee /etc/hostname
	I0904 21:06:23.181656  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-067477
	
	I0904 21:06:23.181748  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:23.199062  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:06:23.199331  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33589 <nil> <nil>}
	I0904 21:06:23.199353  776883 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-067477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-067477/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-067477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 21:06:23.321930  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 21:06:23.321954  776883 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19575-710603/.minikube CaCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19575-710603/.minikube}
	I0904 21:06:23.321974  776883 ubuntu.go:177] setting up certificates
	I0904 21:06:23.321985  776883 provision.go:84] configureAuth start
	I0904 21:06:23.322062  776883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477
	I0904 21:06:23.338420  776883 provision.go:143] copyHostCerts
	I0904 21:06:23.338466  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem
	I0904 21:06:23.338502  776883 exec_runner.go:144] found /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem, removing ...
	I0904 21:06:23.338514  776883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem
	I0904 21:06:23.338607  776883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem (1082 bytes)
	I0904 21:06:23.338703  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem
	I0904 21:06:23.338731  776883 exec_runner.go:144] found /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem, removing ...
	I0904 21:06:23.338740  776883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem
	I0904 21:06:23.338769  776883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem (1123 bytes)
	I0904 21:06:23.338817  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem
	I0904 21:06:23.338837  776883 exec_runner.go:144] found /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem, removing ...
	I0904 21:06:23.338844  776883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem
	I0904 21:06:23.338869  776883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem (1675 bytes)
	I0904 21:06:23.338928  776883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem org=jenkins.ha-067477 san=[127.0.0.1 192.168.49.2 ha-067477 localhost minikube]
	I0904 21:06:23.595300  776883 provision.go:177] copyRemoteCerts
	I0904 21:06:23.595372  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 21:06:23.595414  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:23.611693  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33589 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477/id_rsa Username:docker}
	I0904 21:06:23.702877  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0904 21:06:23.702945  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0904 21:06:23.728319  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0904 21:06:23.728410  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 21:06:23.754815  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0904 21:06:23.754876  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 21:06:23.779977  776883 provision.go:87] duration metric: took 457.978092ms to configureAuth
	I0904 21:06:23.780006  776883 ubuntu.go:193] setting minikube options for container-runtime
	I0904 21:06:23.780261  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:06:23.780394  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:23.798157  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:06:23.798411  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33589 <nil> <nil>}
	I0904 21:06:23.798430  776883 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 21:06:24.268931  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 21:06:24.268997  776883 machine.go:96] duration metric: took 4.392019584s to provisionDockerMachine
	I0904 21:06:24.269023  776883 start.go:293] postStartSetup for "ha-067477" (driver="docker")
	I0904 21:06:24.269050  776883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 21:06:24.269144  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 21:06:24.269228  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:24.290232  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33589 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477/id_rsa Username:docker}
	I0904 21:06:24.382864  776883 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 21:06:24.386050  776883 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 21:06:24.386089  776883 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 21:06:24.386099  776883 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 21:06:24.386107  776883 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0904 21:06:24.386119  776883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/addons for local assets ...
	I0904 21:06:24.386184  776883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/files for local assets ...
	I0904 21:06:24.386266  776883 filesync.go:149] local asset: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem -> 7159812.pem in /etc/ssl/certs
	I0904 21:06:24.386277  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem -> /etc/ssl/certs/7159812.pem
	I0904 21:06:24.386380  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 21:06:24.395152  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem --> /etc/ssl/certs/7159812.pem (1708 bytes)
	I0904 21:06:24.420227  776883 start.go:296] duration metric: took 151.172211ms for postStartSetup
	I0904 21:06:24.420332  776883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:06:24.420378  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:24.437439  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33589 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477/id_rsa Username:docker}
	I0904 21:06:24.522940  776883 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 21:06:24.527399  776883 fix.go:56] duration metric: took 5.009975994s for fixHost
	I0904 21:06:24.527469  776883 start.go:83] releasing machines lock for "ha-067477", held for 5.010074141s
	I0904 21:06:24.527593  776883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477
	I0904 21:06:24.544250  776883 ssh_runner.go:195] Run: cat /version.json
	I0904 21:06:24.544301  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:24.544352  776883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 21:06:24.544405  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:24.564505  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33589 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477/id_rsa Username:docker}
	I0904 21:06:24.565240  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33589 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477/id_rsa Username:docker}
	I0904 21:06:24.789062  776883 ssh_runner.go:195] Run: systemctl --version
	I0904 21:06:24.793565  776883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 21:06:24.936063  776883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 21:06:24.940594  776883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:06:24.949674  776883 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 21:06:24.949759  776883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:06:24.958658  776883 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 21:06:24.958685  776883 start.go:495] detecting cgroup driver to use...
	I0904 21:06:24.958746  776883 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 21:06:24.958810  776883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 21:06:24.971316  776883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 21:06:24.982987  776883 docker.go:217] disabling cri-docker service (if available) ...
	I0904 21:06:24.983085  776883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 21:06:24.996494  776883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 21:06:25.020652  776883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 21:06:25.128771  776883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 21:06:25.223151  776883 docker.go:233] disabling docker service ...
	I0904 21:06:25.223240  776883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 21:06:25.235721  776883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 21:06:25.246893  776883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 21:06:25.333420  776883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 21:06:25.428668  776883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 21:06:25.441793  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 21:06:25.458667  776883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0904 21:06:25.458732  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:25.468552  776883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 21:06:25.468626  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:25.478936  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:25.489043  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:25.500158  776883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 21:06:25.510722  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:25.521517  776883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:25.532010  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:25.543177  776883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 21:06:25.552294  776883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 21:06:25.561215  776883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:06:25.650798  776883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 21:06:25.760026  776883 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 21:06:25.760102  776883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 21:06:25.763635  776883 start.go:563] Will wait 60s for crictl version
	I0904 21:06:25.763700  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:06:25.767165  776883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 21:06:25.806681  776883 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 21:06:25.806838  776883 ssh_runner.go:195] Run: crio --version
	I0904 21:06:25.855162  776883 ssh_runner.go:195] Run: crio --version
	I0904 21:06:25.896998  776883 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0904 21:06:25.899022  776883 cli_runner.go:164] Run: docker network inspect ha-067477 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 21:06:25.914358  776883 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 21:06:25.918173  776883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:06:25.929586  776883 kubeadm.go:883] updating cluster {Name:ha-067477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-067477 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 21:06:25.929740  776883 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 21:06:25.929805  776883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:06:25.977509  776883 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 21:06:25.977530  776883 crio.go:433] Images already preloaded, skipping extraction
	I0904 21:06:25.977584  776883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 21:06:26.028682  776883 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 21:06:26.028923  776883 cache_images.go:84] Images are preloaded, skipping loading
	I0904 21:06:26.029003  776883 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0904 21:06:26.029543  776883 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-067477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-067477 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 21:06:26.030091  776883 ssh_runner.go:195] Run: crio config
	I0904 21:06:26.099457  776883 cni.go:84] Creating CNI manager for ""
	I0904 21:06:26.099482  776883 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0904 21:06:26.099493  776883 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 21:06:26.099550  776883 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-067477 NodeName:ha-067477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 21:06:26.099754  776883 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-067477"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 21:06:26.099782  776883 kube-vip.go:115] generating kube-vip config ...
	I0904 21:06:26.099839  776883 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0904 21:06:26.113039  776883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0904 21:06:26.113152  776883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0904 21:06:26.113222  776883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0904 21:06:26.122479  776883 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 21:06:26.122549  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0904 21:06:26.131473  776883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0904 21:06:26.150316  776883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 21:06:26.169786  776883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0904 21:06:26.188385  776883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0904 21:06:26.206867  776883 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0904 21:06:26.210473  776883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:06:26.221427  776883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:06:26.312611  776883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:06:26.327135  776883 certs.go:68] Setting up /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477 for IP: 192.168.49.2
	I0904 21:06:26.327154  776883 certs.go:194] generating shared ca certs ...
	I0904 21:06:26.327170  776883 certs.go:226] acquiring lock for ca certs: {Name:mkc3a04cbc0797b819dd3c9fec2eaef93961640b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:06:26.327306  776883 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key
	I0904 21:06:26.327357  776883 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key
	I0904 21:06:26.327369  776883 certs.go:256] generating profile certs ...
	I0904 21:06:26.327447  776883 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/client.key
	I0904 21:06:26.327488  776883 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key.07fcaf6e
	I0904 21:06:26.327507  776883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.crt.07fcaf6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0904 21:06:26.618105  776883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.crt.07fcaf6e ...
	I0904 21:06:26.618143  776883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.crt.07fcaf6e: {Name:mkb4f5419c508565ddb731bafdf12b3a3f132602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:06:26.618340  776883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key.07fcaf6e ...
	I0904 21:06:26.618355  776883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key.07fcaf6e: {Name:mkb9c2d5a6e719696941900ed1d9619cd44d856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:06:26.618444  776883 certs.go:381] copying /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.crt.07fcaf6e -> /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.crt
	I0904 21:06:26.618595  776883 certs.go:385] copying /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key.07fcaf6e -> /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key
	I0904 21:06:26.618733  776883 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.key
	I0904 21:06:26.618752  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0904 21:06:26.618769  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0904 21:06:26.618785  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0904 21:06:26.618801  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0904 21:06:26.618817  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0904 21:06:26.618832  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0904 21:06:26.618848  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0904 21:06:26.618862  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0904 21:06:26.618912  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981.pem (1338 bytes)
	W0904 21:06:26.618946  776883 certs.go:480] ignoring /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981_empty.pem, impossibly tiny 0 bytes
	I0904 21:06:26.618958  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 21:06:26.618985  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem (1082 bytes)
	I0904 21:06:26.619012  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem (1123 bytes)
	I0904 21:06:26.619037  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem (1675 bytes)
	I0904 21:06:26.619081  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem (1708 bytes)
	I0904 21:06:26.619113  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981.pem -> /usr/share/ca-certificates/715981.pem
	I0904 21:06:26.619129  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem -> /usr/share/ca-certificates/7159812.pem
	I0904 21:06:26.619162  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:06:26.619755  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 21:06:26.644760  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 21:06:26.672244  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 21:06:26.697100  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 21:06:26.721113  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0904 21:06:26.745346  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 21:06:26.772933  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 21:06:26.797777  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 21:06:26.823513  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981.pem --> /usr/share/ca-certificates/715981.pem (1338 bytes)
	I0904 21:06:26.848914  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem --> /usr/share/ca-certificates/7159812.pem (1708 bytes)
	I0904 21:06:26.874021  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 21:06:26.899245  776883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 21:06:26.918614  776883 ssh_runner.go:195] Run: openssl version
	I0904 21:06:26.924447  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715981.pem && ln -fs /usr/share/ca-certificates/715981.pem /etc/ssl/certs/715981.pem"
	I0904 21:06:26.934308  776883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715981.pem
	I0904 21:06:26.938109  776883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 20:54 /usr/share/ca-certificates/715981.pem
	I0904 21:06:26.938184  776883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715981.pem
	I0904 21:06:26.945332  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715981.pem /etc/ssl/certs/51391683.0"
	I0904 21:06:26.954670  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7159812.pem && ln -fs /usr/share/ca-certificates/7159812.pem /etc/ssl/certs/7159812.pem"
	I0904 21:06:26.964553  776883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7159812.pem
	I0904 21:06:26.968314  776883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 20:54 /usr/share/ca-certificates/7159812.pem
	I0904 21:06:26.968383  776883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7159812.pem
	I0904 21:06:26.975364  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7159812.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 21:06:26.984394  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 21:06:26.993990  776883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:06:26.997543  776883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:06:26.997646  776883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:06:27.005611  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
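The openssl x509 -hash / ln -fs pairs above wire the host's CA files into OpenSSL's hashed lookup directory: each certificate is linked into /etc/ssl/certs under its subject-hash name (b5213941 here is the hash of the minikube CA, 51391683 and 3ec20f2e the hashes of the two jenkins test certs), and the .0 suffix distinguishes certificates that happen to share a hash. A hedged sketch of the same idea for an arbitrary CA file (paths are illustrative):

    # Sketch: expose a CA to OpenSSL's -CApath lookup via a subject-hash symlink.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # .0 = first cert with this subject hash
    # Anything verifying with -CApath /etc/ssl/certs (or the default path) can now chain to this CA.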
	I0904 21:06:27.017833  776883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 21:06:27.022154  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 21:06:27.033731  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 21:06:27.041349  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 21:06:27.048684  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 21:06:27.055983  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 21:06:27.063139  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
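Each openssl x509 ... -checkend 86400 run above asks whether the given control-plane certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 if so and non-zero otherwise, which is what lets minikube reuse the existing certificates instead of regenerating them before restarting the control plane. A small sketch of the same check (the path is one of the certs checked above):

    # Sketch: treat a cert that expires within 24h as needing regeneration.
    if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "certificate expires within 24h; regenerate before restarting" >&2
    fi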
	I0904 21:06:27.070447  776883 kubeadm.go:392] StartCluster: {Name:ha-067477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-067477 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 21:06:27.070613  776883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 21:06:27.070709  776883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 21:06:27.110051  776883 cri.go:89] found id: ""
	I0904 21:06:27.110144  776883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 21:06:27.119317  776883 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 21:06:27.119347  776883 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0904 21:06:27.119422  776883 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 21:06:27.128254  776883 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 21:06:27.128728  776883 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-067477" does not appear in /home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 21:06:27.128843  776883 kubeconfig.go:62] /home/jenkins/minikube-integration/19575-710603/kubeconfig needs updating (will repair): [kubeconfig missing "ha-067477" cluster setting kubeconfig missing "ha-067477" context setting]
	I0904 21:06:27.129120  776883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/kubeconfig: {Name:mk99c3c6b541fdaa941aef3f7a9cb265a3595a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:06:27.129615  776883 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 21:06:27.129998  776883 kapi.go:59] client config for ha-067477: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/client.key", CAFile:"/home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cba20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 21:06:27.130476  776883 cert_rotation.go:140] Starting client certificate rotation controller
	I0904 21:06:27.130676  776883 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 21:06:27.139761  776883 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
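The reconfiguration check above boils down to diffing the kubeadm config this start would use (kubeadm.yaml.new) against the copy already on the node; an empty diff means kubeadm does not have to be re-run for the primary control plane. Roughly (paths taken from the log line above):

    # Sketch: decide whether the primary control plane needs kubeadm reconfiguration.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "running cluster does not require reconfiguration"
    fi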
	I0904 21:06:27.139809  776883 kubeadm.go:597] duration metric: took 20.454589ms to restartPrimaryControlPlane
	I0904 21:06:27.139819  776883 kubeadm.go:394] duration metric: took 69.381538ms to StartCluster
	I0904 21:06:27.139835  776883 settings.go:142] acquiring lock: {Name:mk78ce0fd69886ee058af8e675a61cdabc51cba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:06:27.139923  776883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 21:06:27.140567  776883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-710603/kubeconfig: {Name:mk99c3c6b541fdaa941aef3f7a9cb265a3595a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:06:27.140814  776883 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 21:06:27.140840  776883 start.go:241] waiting for startup goroutines ...
	I0904 21:06:27.140863  776883 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 21:06:27.141145  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:06:27.144443  776883 out.go:177] * Enabled addons: 
	I0904 21:06:27.146470  776883 addons.go:510] duration metric: took 5.616995ms for enable addons: enabled=[]
	I0904 21:06:27.146509  776883 start.go:246] waiting for cluster config update ...
	I0904 21:06:27.146519  776883 start.go:255] writing updated cluster config ...
	I0904 21:06:27.149021  776883 out.go:201] 
	I0904 21:06:27.151106  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:06:27.151257  776883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/config.json ...
	I0904 21:06:27.154951  776883 out.go:177] * Starting "ha-067477-m02" control-plane node in "ha-067477" cluster
	I0904 21:06:27.156973  776883 cache.go:121] Beginning downloading kic base image for docker with crio
	I0904 21:06:27.158952  776883 out.go:177] * Pulling base image v0.0.45 ...
	I0904 21:06:27.160893  776883 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 21:06:27.160923  776883 cache.go:56] Caching tarball of preloaded images
	I0904 21:06:27.160919  776883 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0904 21:06:27.161024  776883 preload.go:172] Found /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0904 21:06:27.161034  776883 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0904 21:06:27.161153  776883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/config.json ...
	W0904 21:06:27.180574  776883 image.go:95] image gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 is of wrong architecture
	I0904 21:06:27.180597  776883 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0904 21:06:27.180679  776883 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0904 21:06:27.180701  776883 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0904 21:06:27.180708  776883 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0904 21:06:27.180717  776883 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0904 21:06:27.180723  776883 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0904 21:06:27.181963  776883 image.go:273] response: 
	I0904 21:06:27.298460  776883 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0904 21:06:27.298500  776883 cache.go:194] Successfully downloaded all kic artifacts
	I0904 21:06:27.298531  776883 start.go:360] acquireMachinesLock for ha-067477-m02: {Name:mke1a73d71e4737f3c96a2fc944e1a5d97f1a967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 21:06:27.298623  776883 start.go:364] duration metric: took 69.381µs to acquireMachinesLock for "ha-067477-m02"
	I0904 21:06:27.298653  776883 start.go:96] Skipping create...Using existing machine configuration
	I0904 21:06:27.298663  776883 fix.go:54] fixHost starting: m02
	I0904 21:06:27.298936  776883 cli_runner.go:164] Run: docker container inspect ha-067477-m02 --format={{.State.Status}}
	I0904 21:06:27.315008  776883 fix.go:112] recreateIfNeeded on ha-067477-m02: state=Stopped err=<nil>
	W0904 21:06:27.315035  776883 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 21:06:27.317760  776883 out.go:177] * Restarting existing docker container for "ha-067477-m02" ...
	I0904 21:06:27.320224  776883 cli_runner.go:164] Run: docker start ha-067477-m02
	I0904 21:06:27.591490  776883 cli_runner.go:164] Run: docker container inspect ha-067477-m02 --format={{.State.Status}}
	I0904 21:06:27.619653  776883 kic.go:430] container "ha-067477-m02" state is running.
	I0904 21:06:27.620024  776883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477-m02
	I0904 21:06:27.641030  776883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/config.json ...
	I0904 21:06:27.641284  776883 machine.go:93] provisionDockerMachine start ...
	I0904 21:06:27.641350  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m02
	I0904 21:06:27.663546  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:06:27.663786  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0904 21:06:27.663799  776883 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 21:06:27.664505  776883 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0904 21:06:30.848753  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-067477-m02
	
	I0904 21:06:30.848821  776883 ubuntu.go:169] provisioning hostname "ha-067477-m02"
	I0904 21:06:30.848903  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m02
	I0904 21:06:30.877565  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:06:30.877804  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0904 21:06:30.877815  776883 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-067477-m02 && echo "ha-067477-m02" | sudo tee /etc/hostname
	I0904 21:06:31.092502  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-067477-m02
	
	I0904 21:06:31.092589  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m02
	I0904 21:06:31.121049  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:06:31.121289  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0904 21:06:31.121305  776883 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-067477-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-067477-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-067477-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 21:06:31.306283  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 21:06:31.306315  776883 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19575-710603/.minikube CaCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19575-710603/.minikube}
	I0904 21:06:31.306332  776883 ubuntu.go:177] setting up certificates
	I0904 21:06:31.306344  776883 provision.go:84] configureAuth start
	I0904 21:06:31.306407  776883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477-m02
	I0904 21:06:31.334101  776883 provision.go:143] copyHostCerts
	I0904 21:06:31.334143  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem
	I0904 21:06:31.334176  776883 exec_runner.go:144] found /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem, removing ...
	I0904 21:06:31.334186  776883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem
	I0904 21:06:31.334262  776883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem (1123 bytes)
	I0904 21:06:31.334343  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem
	I0904 21:06:31.334361  776883 exec_runner.go:144] found /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem, removing ...
	I0904 21:06:31.334365  776883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem
	I0904 21:06:31.334392  776883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem (1675 bytes)
	I0904 21:06:31.334431  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem
	I0904 21:06:31.334448  776883 exec_runner.go:144] found /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem, removing ...
	I0904 21:06:31.334452  776883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem
	I0904 21:06:31.334475  776883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem (1082 bytes)
	I0904 21:06:31.334520  776883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem org=jenkins.ha-067477-m02 san=[127.0.0.1 192.168.49.3 ha-067477-m02 localhost minikube]
	I0904 21:06:31.834850  776883 provision.go:177] copyRemoteCerts
	I0904 21:06:31.834993  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 21:06:31.835056  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m02
	I0904 21:06:31.852561  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m02/id_rsa Username:docker}
	I0904 21:06:31.968270  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0904 21:06:31.968338  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 21:06:32.035688  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0904 21:06:32.035761  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 21:06:32.112016  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0904 21:06:32.112085  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 21:06:32.178561  776883 provision.go:87] duration metric: took 872.199664ms to configureAuth
	I0904 21:06:32.178641  776883 ubuntu.go:193] setting minikube options for container-runtime
	I0904 21:06:32.178931  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:06:32.179093  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m02
	I0904 21:06:32.220510  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:06:32.220747  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0904 21:06:32.220760  776883 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 21:06:32.661305  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 21:06:32.661389  776883 machine.go:96] duration metric: took 5.020087683s to provisionDockerMachine
	I0904 21:06:32.661416  776883 start.go:293] postStartSetup for "ha-067477-m02" (driver="docker")
	I0904 21:06:32.661456  776883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 21:06:32.661548  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 21:06:32.661617  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m02
	I0904 21:06:32.684388  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m02/id_rsa Username:docker}
	I0904 21:06:32.817739  776883 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 21:06:32.828354  776883 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 21:06:32.828392  776883 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 21:06:32.828402  776883 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 21:06:32.828409  776883 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0904 21:06:32.828420  776883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/addons for local assets ...
	I0904 21:06:32.828474  776883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/files for local assets ...
	I0904 21:06:32.828558  776883 filesync.go:149] local asset: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem -> 7159812.pem in /etc/ssl/certs
	I0904 21:06:32.828570  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem -> /etc/ssl/certs/7159812.pem
	I0904 21:06:32.828671  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 21:06:32.914951  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem --> /etc/ssl/certs/7159812.pem (1708 bytes)
	I0904 21:06:33.073745  776883 start.go:296] duration metric: took 412.287261ms for postStartSetup
	I0904 21:06:33.073931  776883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:06:33.074019  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m02
	I0904 21:06:33.105667  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m02/id_rsa Username:docker}
	I0904 21:06:33.320220  776883 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 21:06:33.345833  776883 fix.go:56] duration metric: took 6.047161292s for fixHost
	I0904 21:06:33.345872  776883 start.go:83] releasing machines lock for "ha-067477-m02", held for 6.04723439s
	I0904 21:06:33.345942  776883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477-m02
	I0904 21:06:33.376857  776883 out.go:177] * Found network options:
	I0904 21:06:33.379504  776883 out.go:177]   - NO_PROXY=192.168.49.2
	W0904 21:06:33.382373  776883 proxy.go:119] fail to check proxy env: Error ip not in block
	W0904 21:06:33.382433  776883 proxy.go:119] fail to check proxy env: Error ip not in block
	I0904 21:06:33.382511  776883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 21:06:33.382561  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m02
	I0904 21:06:33.382856  776883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 21:06:33.382916  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m02
	I0904 21:06:33.414062  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m02/id_rsa Username:docker}
	I0904 21:06:33.426156  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m02/id_rsa Username:docker}
	I0904 21:06:33.889938  776883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 21:06:33.917809  776883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:06:33.945026  776883 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 21:06:33.945107  776883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:06:33.960101  776883 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 21:06:33.960131  776883 start.go:495] detecting cgroup driver to use...
	I0904 21:06:33.960164  776883 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 21:06:33.960216  776883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 21:06:33.983066  776883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 21:06:34.004256  776883 docker.go:217] disabling cri-docker service (if available) ...
	I0904 21:06:34.004341  776883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 21:06:34.032171  776883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 21:06:34.060119  776883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 21:06:34.401644  776883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 21:06:34.717008  776883 docker.go:233] disabling docker service ...
	I0904 21:06:34.717129  776883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 21:06:34.783174  776883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 21:06:34.849024  776883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 21:06:35.190369  776883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 21:06:35.544185  776883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 21:06:35.583302  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 21:06:35.664666  776883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0904 21:06:35.664774  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:35.710378  776883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 21:06:35.710512  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:35.772905  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:35.824910  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:35.872720  776883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 21:06:35.928243  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:35.994443  776883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:06:36.024517  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
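The run of sed commands above patches CRI-O's minikube drop-in in place rather than regenerating it: it pins the pause image, switches the cgroup manager to cgroupfs (matching the "cgroupfs" driver detected on the host earlier in this restart), puts conmon in the pod cgroup, and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls so containers can bind low ports without extra privileges. Assuming a stock kicbase image with no other local overrides, the affected keys in /etc/crio/crio.conf.d/02-crio.conf end up roughly as follows (a sketch, not the verbatim file):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]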
	I0904 21:06:36.044076  776883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 21:06:36.066705  776883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 21:06:36.080253  776883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:06:36.383494  776883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 21:06:37.834864  776883 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.451287042s)
	I0904 21:06:37.834939  776883 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 21:06:37.835019  776883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 21:06:37.842560  776883 start.go:563] Will wait 60s for crictl version
	I0904 21:06:37.842691  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:06:37.858281  776883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 21:06:37.942173  776883 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 21:06:37.942344  776883 ssh_runner.go:195] Run: crio --version
	I0904 21:06:38.030793  776883 ssh_runner.go:195] Run: crio --version
	I0904 21:06:38.115500  776883 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0904 21:06:38.117448  776883 out.go:177]   - env NO_PROXY=192.168.49.2
	I0904 21:06:38.119535  776883 cli_runner.go:164] Run: docker network inspect ha-067477 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 21:06:38.186536  776883 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 21:06:38.190504  776883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:06:38.212724  776883 mustload.go:65] Loading cluster: ha-067477
	I0904 21:06:38.212971  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:06:38.213260  776883 cli_runner.go:164] Run: docker container inspect ha-067477 --format={{.State.Status}}
	I0904 21:06:38.242029  776883 host.go:66] Checking if "ha-067477" exists ...
	I0904 21:06:38.242344  776883 certs.go:68] Setting up /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477 for IP: 192.168.49.3
	I0904 21:06:38.242364  776883 certs.go:194] generating shared ca certs ...
	I0904 21:06:38.242384  776883 certs.go:226] acquiring lock for ca certs: {Name:mkc3a04cbc0797b819dd3c9fec2eaef93961640b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:06:38.242504  776883 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key
	I0904 21:06:38.242552  776883 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key
	I0904 21:06:38.242563  776883 certs.go:256] generating profile certs ...
	I0904 21:06:38.242664  776883 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/client.key
	I0904 21:06:38.242734  776883 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key.049dd765
	I0904 21:06:38.242776  776883 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.key
	I0904 21:06:38.242790  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0904 21:06:38.242809  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0904 21:06:38.242831  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0904 21:06:38.242843  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0904 21:06:38.242859  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0904 21:06:38.242877  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0904 21:06:38.242894  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0904 21:06:38.242907  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0904 21:06:38.242975  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981.pem (1338 bytes)
	W0904 21:06:38.243013  776883 certs.go:480] ignoring /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981_empty.pem, impossibly tiny 0 bytes
	I0904 21:06:38.243030  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 21:06:38.243066  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem (1082 bytes)
	I0904 21:06:38.243097  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem (1123 bytes)
	I0904 21:06:38.243123  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem (1675 bytes)
	I0904 21:06:38.243178  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem (1708 bytes)
	I0904 21:06:38.243211  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:06:38.243227  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981.pem -> /usr/share/ca-certificates/715981.pem
	I0904 21:06:38.243238  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem -> /usr/share/ca-certificates/7159812.pem
	I0904 21:06:38.243303  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:06:38.278494  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33589 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477/id_rsa Username:docker}
	I0904 21:06:38.374143  776883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0904 21:06:38.390735  776883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0904 21:06:38.410420  776883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0904 21:06:38.419155  776883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0904 21:06:38.440511  776883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0904 21:06:38.444045  776883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0904 21:06:38.457734  776883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0904 21:06:38.471763  776883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0904 21:06:38.503937  776883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0904 21:06:38.515093  776883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0904 21:06:38.546597  776883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0904 21:06:38.557007  776883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0904 21:06:38.580177  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 21:06:38.631648  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 21:06:38.670522  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 21:06:38.706872  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 21:06:38.771236  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0904 21:06:38.853366  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 21:06:38.943038  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 21:06:39.002397  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 21:06:39.120164  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 21:06:39.178577  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981.pem --> /usr/share/ca-certificates/715981.pem (1338 bytes)
	I0904 21:06:39.212196  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem --> /usr/share/ca-certificates/7159812.pem (1708 bytes)
	I0904 21:06:39.243456  776883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0904 21:06:39.264781  776883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0904 21:06:39.295195  776883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0904 21:06:39.328079  776883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0904 21:06:39.347805  776883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0904 21:06:39.369777  776883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0904 21:06:39.388737  776883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0904 21:06:39.408756  776883 ssh_runner.go:195] Run: openssl version
	I0904 21:06:39.414879  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 21:06:39.423990  776883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:06:39.427964  776883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:06:39.428090  776883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:06:39.436011  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 21:06:39.444800  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715981.pem && ln -fs /usr/share/ca-certificates/715981.pem /etc/ssl/certs/715981.pem"
	I0904 21:06:39.454682  776883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715981.pem
	I0904 21:06:39.461439  776883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 20:54 /usr/share/ca-certificates/715981.pem
	I0904 21:06:39.461531  776883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715981.pem
	I0904 21:06:39.469143  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715981.pem /etc/ssl/certs/51391683.0"
	I0904 21:06:39.478324  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7159812.pem && ln -fs /usr/share/ca-certificates/7159812.pem /etc/ssl/certs/7159812.pem"
	I0904 21:06:39.487513  776883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7159812.pem
	I0904 21:06:39.491234  776883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 20:54 /usr/share/ca-certificates/7159812.pem
	I0904 21:06:39.491325  776883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7159812.pem
	I0904 21:06:39.498684  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7159812.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 21:06:39.507599  776883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 21:06:39.511565  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 21:06:39.519430  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 21:06:39.527504  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 21:06:39.535154  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 21:06:39.545627  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 21:06:39.554089  776883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0904 21:06:39.562024  776883 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.0 crio true true} ...
	I0904 21:06:39.562152  776883 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-067477-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-067477 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 21:06:39.562225  776883 kube-vip.go:115] generating kube-vip config ...
	I0904 21:06:39.562277  776883 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0904 21:06:39.578340  776883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0904 21:06:39.578434  776883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0904 21:06:39.578516  776883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0904 21:06:39.587878  776883 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 21:06:39.587983  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0904 21:06:39.597116  776883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 21:06:39.615471  776883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 21:06:39.652231  776883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0904 21:06:39.687686  776883 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0904 21:06:39.691649  776883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:06:39.709004  776883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:06:39.909472  776883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:06:39.936710  776883 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 21:06:39.937021  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:06:39.941946  776883 out.go:177] * Verifying Kubernetes components...
	I0904 21:06:39.944291  776883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:06:40.107465  776883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:06:40.127031  776883 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 21:06:40.127415  776883 kapi.go:59] client config for ha-067477: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/client.key", CAFile:"/home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cba20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0904 21:06:40.127493  776883 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0904 21:06:40.127758  776883 node_ready.go:35] waiting up to 6m0s for node "ha-067477-m02" to be "Ready" ...
	I0904 21:06:40.127859  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:06:40.128049  776883 round_trippers.go:469] Request Headers:
	I0904 21:06:40.128084  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:06:40.128092  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:06:51.252243  776883 round_trippers.go:574] Response Status: 500 Internal Server Error in 11124 milliseconds
	I0904 21:06:51.253159  776883 node_ready.go:53] error getting node "ha-067477-m02": etcdserver: request timed out
	I0904 21:06:51.253267  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:06:51.253273  776883 round_trippers.go:469] Request Headers:
	I0904 21:06:51.253282  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:06:51.253287  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:06:59.699607  776883 round_trippers.go:574] Response Status: 500 Internal Server Error in 8446 milliseconds
	I0904 21:06:59.703265  776883 node_ready.go:53] error getting node "ha-067477-m02": etcdserver: leader changed
	I0904 21:06:59.703334  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:06:59.703340  776883 round_trippers.go:469] Request Headers:
	I0904 21:06:59.703348  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:06:59.703354  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:06:59.718024  776883 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0904 21:06:59.719386  776883 node_ready.go:49] node "ha-067477-m02" has status "Ready":"True"
	I0904 21:06:59.719405  776883 node_ready.go:38] duration metric: took 19.591626096s for node "ha-067477-m02" to be "Ready" ...
	I0904 21:06:59.719415  776883 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 21:06:59.719455  776883 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 21:06:59.719466  776883 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 21:06:59.719526  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0904 21:06:59.719531  776883 round_trippers.go:469] Request Headers:
	I0904 21:06:59.719539  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:06:59.719542  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:06:59.746665  776883 round_trippers.go:574] Response Status: 429 Too Many Requests in 27 milliseconds
	I0904 21:07:00.747316  776883 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0904 21:07:00.747375  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0904 21:07:00.747382  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:00.747391  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:00.747397  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:00.813657  776883 round_trippers.go:574] Response Status: 429 Too Many Requests in 66 milliseconds
	I0904 21:07:01.819235  776883 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0904 21:07:01.819299  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0904 21:07:01.819305  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.819314  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.819320  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.826876  776883 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0904 21:07:01.875602  776883 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:01.875784  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:07:01.875810  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.875835  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.875856  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.881712  776883 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0904 21:07:01.886868  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:01.886890  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.886899  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.886905  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.899178  776883 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0904 21:07:01.899746  776883 pod_ready.go:93] pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:01.899787  776883 pod_ready.go:82] duration metric: took 24.107085ms for pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:01.899817  776883 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qdnlw" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:01.899911  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qdnlw
	I0904 21:07:01.899938  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.899961  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.899982  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.902990  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:01.903745  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:01.903792  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.903817  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.903837  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.909613  776883 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0904 21:07:01.910694  776883 pod_ready.go:93] pod "coredns-6f6b679f8f-qdnlw" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:01.910752  776883 pod_ready.go:82] duration metric: took 10.912925ms for pod "coredns-6f6b679f8f-qdnlw" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:01.910779  776883 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:01.910878  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-067477
	I0904 21:07:01.910904  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.910926  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.910946  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.917612  776883 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0904 21:07:01.918326  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:01.918371  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.918393  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.918415  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.921655  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:01.922290  776883 pod_ready.go:93] pod "etcd-ha-067477" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:01.922332  776883 pod_ready.go:82] duration metric: took 11.531454ms for pod "etcd-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:01.922358  776883 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:01.922458  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-067477-m02
	I0904 21:07:01.922483  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.922517  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.922534  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.927525  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:07:01.928184  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:01.928223  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.928246  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.928267  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.933075  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:07:01.933649  776883 pod_ready.go:93] pod "etcd-ha-067477-m02" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:01.933689  776883 pod_ready.go:82] duration metric: took 11.31006ms for pod "etcd-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:01.933717  776883 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:01.933808  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-067477-m03
	I0904 21:07:01.933835  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:01.933868  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:01.933888  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:01.938736  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:07:02.019617  776883 request.go:632] Waited for 80.21107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m03
	I0904 21:07:02.019736  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m03
	I0904 21:07:02.019768  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:02.019799  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:02.019821  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:02.024531  776883 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0904 21:07:02.024721  776883 pod_ready.go:98] node "ha-067477-m03" hosting pod "etcd-ha-067477-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:02.024756  776883 pod_ready.go:82] duration metric: took 91.017356ms for pod "etcd-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	E0904 21:07:02.024782  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477-m03" hosting pod "etcd-ha-067477-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:02.024827  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:02.220054  776883 request.go:632] Waited for 195.118106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477
	I0904 21:07:02.220152  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477
	I0904 21:07:02.220172  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:02.220204  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:02.220241  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:02.234227  776883 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0904 21:07:02.419292  776883 request.go:632] Waited for 184.233348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:02.419398  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:02.419431  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:02.419459  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:02.419479  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:02.422148  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:02.423103  776883 pod_ready.go:93] pod "kube-apiserver-ha-067477" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:02.423162  776883 pod_ready.go:82] duration metric: took 398.304778ms for pod "kube-apiserver-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:02.423196  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:02.619603  776883 request.go:632] Waited for 196.305402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477-m02
	I0904 21:07:02.619731  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477-m02
	I0904 21:07:02.619770  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:02.619793  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:02.619814  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:02.623223  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:02.819296  776883 request.go:632] Waited for 195.287586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:02.819429  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:02.819446  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:02.819455  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:02.819459  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:02.823555  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:07:02.824775  776883 pod_ready.go:93] pod "kube-apiserver-ha-067477-m02" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:02.824843  776883 pod_ready.go:82] duration metric: took 401.625113ms for pod "kube-apiserver-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:02.824878  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:03.021102  776883 request.go:632] Waited for 196.110576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477-m03
	I0904 21:07:03.021234  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477-m03
	I0904 21:07:03.021267  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:03.021298  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:03.021321  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:03.035190  776883 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0904 21:07:03.219768  776883 request.go:632] Waited for 183.289066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m03
	I0904 21:07:03.219896  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m03
	I0904 21:07:03.219907  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:03.219916  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:03.219921  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:03.222765  776883 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0904 21:07:03.222966  776883 pod_ready.go:98] node "ha-067477-m03" hosting pod "kube-apiserver-ha-067477-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:03.223006  776883 pod_ready.go:82] duration metric: took 398.085427ms for pod "kube-apiserver-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	E0904 21:07:03.223030  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477-m03" hosting pod "kube-apiserver-ha-067477-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:03.223052  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:03.419383  776883 request.go:632] Waited for 196.238876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477
	I0904 21:07:03.419441  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477
	I0904 21:07:03.419448  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:03.419456  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:03.419461  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:03.422155  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:03.619247  776883 request.go:632] Waited for 196.058581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:03.619329  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:03.619343  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:03.619352  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:03.619358  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:03.622966  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:03.623890  776883 pod_ready.go:93] pod "kube-controller-manager-ha-067477" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:03.623915  776883 pod_ready.go:82] duration metric: took 400.846087ms for pod "kube-controller-manager-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:03.623928  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:03.820311  776883 request.go:632] Waited for 196.299404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477-m02
	I0904 21:07:03.820373  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477-m02
	I0904 21:07:03.820383  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:03.820392  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:03.820400  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:03.823269  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:04.021055  776883 request.go:632] Waited for 196.820065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:04.021116  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:04.021122  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:04.021131  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:04.021136  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:04.033989  776883 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0904 21:07:04.035245  776883 pod_ready.go:93] pod "kube-controller-manager-ha-067477-m02" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:04.035275  776883 pod_ready.go:82] duration metric: took 411.33827ms for pod "kube-controller-manager-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:04.035289  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:04.219990  776883 request.go:632] Waited for 184.600372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477-m03
	I0904 21:07:04.220075  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477-m03
	I0904 21:07:04.220086  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:04.220094  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:04.220104  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:04.225570  776883 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0904 21:07:04.420128  776883 request.go:632] Waited for 193.338126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m03
	I0904 21:07:04.420280  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m03
	I0904 21:07:04.420320  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:04.420335  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:04.420342  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:04.423056  776883 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0904 21:07:04.423240  776883 pod_ready.go:98] node "ha-067477-m03" hosting pod "kube-controller-manager-ha-067477-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:04.423260  776883 pod_ready.go:82] duration metric: took 387.963197ms for pod "kube-controller-manager-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	E0904 21:07:04.423282  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477-m03" hosting pod "kube-controller-manager-ha-067477-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:04.423295  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7h6l2" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:04.619494  776883 request.go:632] Waited for 196.120545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:04.619568  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:04.619577  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:04.619586  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:04.619590  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:04.624677  776883 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0904 21:07:04.820053  776883 request.go:632] Waited for 194.37446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:04.820256  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:04.820299  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:04.820323  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:04.820341  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:04.823488  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:05.020685  776883 request.go:632] Waited for 96.741672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:05.020764  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:05.020777  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:05.020787  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:05.020797  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:05.025004  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:07:05.219347  776883 request.go:632] Waited for 193.246453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:05.219425  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:05.219436  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:05.219445  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:05.219453  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:05.222663  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:05.424360  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:05.424386  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:05.424396  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:05.424400  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:05.427473  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:05.619564  776883 request.go:632] Waited for 191.319022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:05.619664  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:05.619692  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:05.619707  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:05.619715  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:05.623039  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:05.923597  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:05.923619  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:05.923629  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:05.923641  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:05.931192  776883 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0904 21:07:06.019794  776883 request.go:632] Waited for 85.273438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:06.019857  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:06.019863  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:06.019872  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:06.019876  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:06.032058  776883 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0904 21:07:06.423583  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:06.423606  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:06.423616  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:06.423622  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:06.426390  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:06.427204  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:06.427221  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:06.427232  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:06.427253  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:06.429800  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:06.430504  776883 pod_ready.go:103] pod "kube-proxy-7h6l2" in "kube-system" namespace has status "Ready":"False"
	I0904 21:07:06.924289  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:06.924317  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:06.924328  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:06.924333  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:06.927328  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:06.928074  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:06.928095  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:06.928103  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:06.928108  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:06.930836  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:07.424462  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:07.424488  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:07.424498  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:07.424501  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:07.427594  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:07.428270  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:07.428289  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:07.428298  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:07.428303  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:07.431010  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:07.923749  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:07.923770  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:07.923780  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:07.923785  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:07.926589  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:07.927381  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:07.927399  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:07.927409  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:07.927413  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:07.929901  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:08.423690  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:08.423714  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:08.423723  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:08.423727  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:08.426602  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:08.427306  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:08.427318  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:08.427327  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:08.427331  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:08.429838  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:08.430661  776883 pod_ready.go:103] pod "kube-proxy-7h6l2" in "kube-system" namespace has status "Ready":"False"
	I0904 21:07:08.923524  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:08.923546  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:08.923556  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:08.923561  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:08.927159  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:08.927824  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:08.927839  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:08.927849  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:08.927852  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:08.930371  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:09.423578  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:09.423604  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:09.423613  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:09.423617  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:09.426547  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:09.427317  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:09.427339  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:09.427349  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:09.427353  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:09.429949  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:09.923520  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:09.923546  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:09.923557  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:09.923561  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:09.926714  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:09.927589  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:09.927611  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:09.927622  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:09.927626  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:09.930300  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:10.424224  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:10.424246  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:10.424255  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:10.424259  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:10.432364  776883 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0904 21:07:10.433504  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:10.433533  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:10.433542  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:10.433548  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:10.436195  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:10.437223  776883 pod_ready.go:103] pod "kube-proxy-7h6l2" in "kube-system" namespace has status "Ready":"False"
	I0904 21:07:10.924046  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:10.924076  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:10.924085  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:10.924090  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:10.929433  776883 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0904 21:07:10.930917  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:10.930938  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:10.930955  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:10.930960  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:10.938534  776883 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0904 21:07:11.423563  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:11.423584  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:11.423596  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:11.423600  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:11.426828  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:11.427527  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:11.427546  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:11.427554  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:11.427560  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:11.429988  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:11.923810  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:11.923831  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:11.923841  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:11.923846  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:11.930513  776883 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0904 21:07:11.931541  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:11.931559  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:11.931568  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:11.931572  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:11.934772  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:12.423578  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:12.423602  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:12.423611  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:12.423616  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:12.427066  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:12.428340  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:12.428360  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:12.428374  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:12.428404  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:12.431345  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:12.923639  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:12.923663  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:12.923681  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:12.923688  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:12.927312  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:12.928910  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:12.928929  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:12.928938  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:12.928956  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:12.932767  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:12.933668  776883 pod_ready.go:103] pod "kube-proxy-7h6l2" in "kube-system" namespace has status "Ready":"False"
	I0904 21:07:13.424466  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:13.424491  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:13.424501  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:13.424505  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:13.427416  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:13.428393  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:13.428415  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:13.428424  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:13.428429  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:13.431089  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:13.923549  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:13.923579  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:13.923589  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:13.923593  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:13.926530  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:13.927423  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:13.927445  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:13.927453  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:13.927456  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:13.929830  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:14.424322  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:14.424347  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:14.424355  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:14.424359  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:14.427246  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:14.427863  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:14.427884  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:14.427893  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:14.427898  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:14.430437  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:14.923553  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:14.923579  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:14.923589  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:14.923597  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:14.926465  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:14.927240  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:14.927259  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:14.927269  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:14.927274  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:14.929841  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:15.423934  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:15.423960  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:15.423970  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:15.423974  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:15.426983  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:15.427781  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:15.427804  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:15.427813  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:15.427819  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:15.430453  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:15.431241  776883 pod_ready.go:103] pod "kube-proxy-7h6l2" in "kube-system" namespace has status "Ready":"False"
	I0904 21:07:15.924507  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:15.924536  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:15.924547  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:15.924552  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:15.927669  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:15.928388  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:15.928408  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:15.928417  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:15.928423  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:15.930925  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:16.423585  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:16.423610  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:16.423619  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:16.423623  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:16.426722  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:16.427511  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:16.427530  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:16.427539  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:16.427544  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:16.430251  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:16.923561  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:16.923589  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:16.923600  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:16.923607  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:16.926657  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:16.927385  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:16.927407  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:16.927419  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:16.927424  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:16.930862  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:17.423884  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:17.423909  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:17.423918  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:17.423922  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:17.426950  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:17.427720  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:17.427742  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:17.427752  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:17.427757  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:17.430438  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:17.924448  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:17.924470  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:17.924479  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:17.924484  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:17.927830  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:17.928751  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:17.928775  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:17.928784  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:17.928789  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:17.931618  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:17.932267  776883 pod_ready.go:103] pod "kube-proxy-7h6l2" in "kube-system" namespace has status "Ready":"False"
	I0904 21:07:18.424393  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:18.424421  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:18.424429  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:18.424433  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:18.427498  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:18.428237  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:18.428256  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:18.428265  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:18.428269  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:18.431439  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:18.923590  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:18.923613  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:18.923623  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:18.923628  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:18.927015  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:18.928337  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:18.928384  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:18.928395  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:18.928400  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:18.931403  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:19.424203  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:19.424232  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:19.424243  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:19.424247  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:19.427129  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:19.427717  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:19.427734  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:19.427744  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:19.427748  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:19.430259  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:19.924174  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:19.924205  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:19.924215  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:19.924221  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:19.927359  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:19.928056  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:19.928074  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:19.928084  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:19.928088  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:19.931512  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:19.932846  776883 pod_ready.go:103] pod "kube-proxy-7h6l2" in "kube-system" namespace has status "Ready":"False"
	I0904 21:07:20.423593  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:07:20.423616  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:20.423625  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:20.423631  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:20.426465  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:20.427287  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:20.427307  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:20.427317  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:20.427321  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:20.429968  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:20.430730  776883 pod_ready.go:93] pod "kube-proxy-7h6l2" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:20.430753  776883 pod_ready.go:82] duration metric: took 16.007448074s for pod "kube-proxy-7h6l2" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:20.430767  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9c6g4" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:20.430868  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9c6g4
	I0904 21:07:20.430879  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:20.430887  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:20.430900  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:20.433560  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:20.434428  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:07:20.434449  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:20.434459  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:20.434463  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:20.437005  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:20.437552  776883 pod_ready.go:93] pod "kube-proxy-9c6g4" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:20.437571  776883 pod_ready.go:82] duration metric: took 6.772547ms for pod "kube-proxy-9c6g4" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:20.437584  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8z9c" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:20.437655  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:20.437668  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:20.437676  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:20.437680  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:20.440268  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:20.440976  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:20.440994  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:20.441003  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:20.441007  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:20.443463  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:20.937943  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:20.937964  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:20.937974  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:20.938005  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:20.945565  776883 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0904 21:07:20.946404  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:20.946424  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:20.946433  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:20.946437  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:20.963906  776883 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0904 21:07:21.438049  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:21.438075  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:21.438085  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:21.438090  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:21.445762  776883 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0904 21:07:21.447182  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:21.447202  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:21.447212  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:21.447215  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:21.449946  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:21.938213  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:21.938253  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:21.938262  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:21.938268  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:21.940839  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:21.941980  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:21.942002  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:21.942011  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:21.942024  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:21.944599  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:22.437837  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:22.437916  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:22.437927  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:22.437932  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:22.440700  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:22.441459  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:22.441479  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:22.441489  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:22.441494  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:22.443948  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:22.444675  776883 pod_ready.go:103] pod "kube-proxy-n8z9c" in "kube-system" namespace has status "Ready":"False"
	I0904 21:07:22.937836  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:22.937891  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:22.937902  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:22.937907  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:22.946847  776883 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0904 21:07:22.949603  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:22.949665  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:22.949691  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:22.949714  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:22.952940  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:23.438054  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:23.438126  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:23.438160  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:23.438193  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:23.441602  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:23.442374  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:23.442430  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:23.442453  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:23.442481  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:23.445300  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:23.938207  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:23.938228  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:23.938238  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:23.938244  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:23.959042  776883 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0904 21:07:23.959922  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:23.959937  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:23.959944  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:23.959948  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:23.969807  776883 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0904 21:07:24.438371  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:24.438395  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:24.438405  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:24.438410  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:24.441299  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:24.441952  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:24.441972  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:24.441981  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:24.441986  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:24.444946  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:24.445518  776883 pod_ready.go:103] pod "kube-proxy-n8z9c" in "kube-system" namespace has status "Ready":"False"
	I0904 21:07:24.938165  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:24.938194  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:24.938204  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:24.938215  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:24.941789  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:24.942392  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:24.942403  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:24.942411  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:24.942416  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:24.945091  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:25.438399  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:25.438425  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.438435  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.438439  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.441416  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:25.442185  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:25.442204  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.442211  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.442218  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.444922  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:25.938264  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:07:25.938288  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.938298  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.938302  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.942406  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:07:25.943252  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:25.943273  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.943283  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.943287  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.947113  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:25.948065  776883 pod_ready.go:93] pod "kube-proxy-n8z9c" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:25.948087  776883 pod_ready.go:82] duration metric: took 5.510490451s for pod "kube-proxy-n8z9c" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:25.948099  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v2r5c" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:25.948174  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v2r5c
	I0904 21:07:25.948186  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.948194  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.948198  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.952814  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:07:25.954131  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m03
	I0904 21:07:25.954154  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.954163  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.954167  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.959646  776883 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0904 21:07:25.959983  776883 pod_ready.go:98] node "ha-067477-m03" hosting pod "kube-proxy-v2r5c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:25.960006  776883 pod_ready.go:82] duration metric: took 11.898748ms for pod "kube-proxy-v2r5c" in "kube-system" namespace to be "Ready" ...
	E0904 21:07:25.960016  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477-m03" hosting pod "kube-proxy-v2r5c" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:25.960023  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:25.960097  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-067477
	I0904 21:07:25.960109  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.960126  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.960133  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.963137  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:07:25.964129  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:07:25.964146  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.964165  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.964171  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.968077  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:25.969136  776883 pod_ready.go:93] pod "kube-scheduler-ha-067477" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:25.969157  776883 pod_ready.go:82] duration metric: took 9.12524ms for pod "kube-scheduler-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:25.969169  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:25.969239  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-067477-m02
	I0904 21:07:25.969251  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.969266  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.969274  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.984397  776883 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0904 21:07:25.985262  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:07:25.985282  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.985292  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.985296  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.989442  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:07:25.990337  776883 pod_ready.go:93] pod "kube-scheduler-ha-067477-m02" in "kube-system" namespace has status "Ready":"True"
	I0904 21:07:25.990368  776883 pod_ready.go:82] duration metric: took 21.191091ms for pod "kube-scheduler-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:25.990380  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	I0904 21:07:25.990471  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-067477-m03
	I0904 21:07:25.990482  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.990491  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.990495  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:25.995154  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:07:25.995840  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m03
	I0904 21:07:25.995866  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:25.995875  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:25.995882  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:26.018314  776883 round_trippers.go:574] Response Status: 404 Not Found in 22 milliseconds
	I0904 21:07:26.018454  776883 pod_ready.go:98] node "ha-067477-m03" hosting pod "kube-scheduler-ha-067477-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:26.018478  776883 pod_ready.go:82] duration metric: took 28.087336ms for pod "kube-scheduler-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	E0904 21:07:26.018489  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477-m03" hosting pod "kube-scheduler-ha-067477-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-067477-m03": nodes "ha-067477-m03" not found
	I0904 21:07:26.018499  776883 pod_ready.go:39] duration metric: took 26.299069246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 21:07:26.018514  776883 api_server.go:52] waiting for apiserver process to appear ...
	I0904 21:07:26.018588  776883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:07:26.051051  776883 api_server.go:72] duration metric: took 46.114235803s to wait for apiserver process to appear ...
	I0904 21:07:26.051080  776883 api_server.go:88] waiting for apiserver healthz status ...
	I0904 21:07:26.051113  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:26.059274  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:26.059315  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
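Each 500 above is one iteration of the apiserver readiness wait: the driver hits /healthz roughly every 500ms and retries until it gets a 200, with start-service-ip-repair-controllers being the only hook still failing. A rough, hypothetical sketch of such a loop — this is not minikube's api_server.go, and the timeout budget is an assumption; only the URL is taken from the log — is:

```go
// healthz_poll_sketch.go — illustrative polling of an apiserver /healthz
// endpoint until it returns 200 OK, mirroring the log output above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	// The test talks to the apiserver over its self-signed cert, so this
	// sketch skips verification; a real client would pin the cluster CA.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			// Non-200: print the per-hook breakdown and retry, as in the report.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// URL taken from the log above; the 4-minute budget is an assumption.
	if err := waitHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```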
	I0904 21:07:26.552060  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:26.560571  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:26.560609  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:27.052041  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:27.059884  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:27.059913  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:27.551722  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:27.559342  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:27.559368  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:28.051904  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:28.059858  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:28.059889  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:28.551326  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:28.559357  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:28.559410  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:29.051992  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:29.061314  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:29.061381  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:29.552024  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:29.560565  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:29.560601  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	[...identical healthz polls repeated at roughly 0.5s intervals from 21:07:30.053432 through 21:07:37.560200; every request to https://192.168.49.2:8443/healthz returned 500 with the same check output, the only failing check being [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld...]
	I0904 21:07:38.051423  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:38.059788  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:38.059819  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:38.551384  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:38.559048  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:38.559089  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:39.051268  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:39.059100  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:39.059131  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:39.552093  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:39.560616  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:39.560645  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:40.052921  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 21:07:40.053103  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 21:07:40.108293  776883 cri.go:89] found id: "f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2"
	I0904 21:07:40.108369  776883 cri.go:89] found id: "92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466"
	I0904 21:07:40.108389  776883 cri.go:89] found id: ""
	I0904 21:07:40.108414  776883 logs.go:276] 2 containers: [f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2 92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466]
	I0904 21:07:40.108510  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.112487  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.116256  776883 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 21:07:40.116345  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 21:07:40.156343  776883 cri.go:89] found id: "38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634"
	I0904 21:07:40.156376  776883 cri.go:89] found id: "af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2"
	I0904 21:07:40.156383  776883 cri.go:89] found id: ""
	I0904 21:07:40.156391  776883 logs.go:276] 2 containers: [38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634 af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2]
	I0904 21:07:40.156468  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.160387  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.164219  776883 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 21:07:40.164300  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 21:07:40.211181  776883 cri.go:89] found id: ""
	I0904 21:07:40.211247  776883 logs.go:276] 0 containers: []
	W0904 21:07:40.211274  776883 logs.go:278] No container was found matching "coredns"
	I0904 21:07:40.211300  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 21:07:40.211370  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 21:07:40.250328  776883 cri.go:89] found id: "8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716"
	I0904 21:07:40.250353  776883 cri.go:89] found id: "1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de"
	I0904 21:07:40.250360  776883 cri.go:89] found id: ""
	I0904 21:07:40.250367  776883 logs.go:276] 2 containers: [8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716 1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de]
	I0904 21:07:40.250437  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.254273  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.257948  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 21:07:40.258047  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 21:07:40.301354  776883 cri.go:89] found id: "538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a"
	I0904 21:07:40.301375  776883 cri.go:89] found id: ""
	I0904 21:07:40.301383  776883 logs.go:276] 1 containers: [538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a]
	I0904 21:07:40.301456  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.305126  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 21:07:40.305203  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 21:07:40.341810  776883 cri.go:89] found id: "a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad"
	I0904 21:07:40.341928  776883 cri.go:89] found id: "dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f"
	I0904 21:07:40.341945  776883 cri.go:89] found id: ""
	I0904 21:07:40.341954  776883 logs.go:276] 2 containers: [a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f]
	I0904 21:07:40.342026  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.345615  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.349044  776883 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 21:07:40.349120  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 21:07:40.389914  776883 cri.go:89] found id: "955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d"
	I0904 21:07:40.389938  776883 cri.go:89] found id: ""
	I0904 21:07:40.389947  776883 logs.go:276] 1 containers: [955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d]
	I0904 21:07:40.390012  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:40.393684  776883 logs.go:123] Gathering logs for etcd [38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634] ...
	I0904 21:07:40.393712  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634"
	I0904 21:07:40.445697  776883 logs.go:123] Gathering logs for kube-scheduler [8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716] ...
	I0904 21:07:40.445733  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716"
	I0904 21:07:40.485079  776883 logs.go:123] Gathering logs for kindnet [955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d] ...
	I0904 21:07:40.485106  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d"
	I0904 21:07:40.525194  776883 logs.go:123] Gathering logs for CRI-O ...
	I0904 21:07:40.525226  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 21:07:40.598148  776883 logs.go:123] Gathering logs for kubelet ...
	I0904 21:07:40.598182  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 21:07:40.693872  776883 logs.go:123] Gathering logs for dmesg ...
	I0904 21:07:40.693913  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 21:07:40.711658  776883 logs.go:123] Gathering logs for describe nodes ...
	I0904 21:07:40.711688  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 21:07:41.497152  776883 logs.go:123] Gathering logs for kube-apiserver [f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2] ...
	I0904 21:07:41.497334  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2"
	I0904 21:07:41.570604  776883 logs.go:123] Gathering logs for kube-proxy [538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a] ...
	I0904 21:07:41.570676  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a"
	I0904 21:07:41.623839  776883 logs.go:123] Gathering logs for kube-scheduler [1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de] ...
	I0904 21:07:41.623866  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de"
	I0904 21:07:41.671454  776883 logs.go:123] Gathering logs for kube-controller-manager [a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad] ...
	I0904 21:07:41.671528  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad"
	I0904 21:07:41.761302  776883 logs.go:123] Gathering logs for container status ...
	I0904 21:07:41.761346  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 21:07:41.834532  776883 logs.go:123] Gathering logs for kube-apiserver [92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466] ...
	I0904 21:07:41.834564  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466"
	I0904 21:07:41.888001  776883 logs.go:123] Gathering logs for etcd [af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2] ...
	I0904 21:07:41.888031  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2"
	I0904 21:07:41.969092  776883 logs.go:123] Gathering logs for kube-controller-manager [dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f] ...
	I0904 21:07:41.969206  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f"
	I0904 21:07:44.531186  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:44.539113  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 21:07:44.539140  776883 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 21:07:44.539168  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 21:07:44.539244  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 21:07:44.579009  776883 cri.go:89] found id: "f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2"
	I0904 21:07:44.579031  776883 cri.go:89] found id: "92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466"
	I0904 21:07:44.579036  776883 cri.go:89] found id: ""
	I0904 21:07:44.579043  776883 logs.go:276] 2 containers: [f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2 92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466]
	I0904 21:07:44.579100  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.582864  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.586435  776883 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 21:07:44.586565  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 21:07:44.624328  776883 cri.go:89] found id: "38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634"
	I0904 21:07:44.624393  776883 cri.go:89] found id: "af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2"
	I0904 21:07:44.624413  776883 cri.go:89] found id: ""
	I0904 21:07:44.624432  776883 logs.go:276] 2 containers: [38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634 af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2]
	I0904 21:07:44.624508  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.628242  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.631819  776883 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 21:07:44.631893  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 21:07:44.669027  776883 cri.go:89] found id: ""
	I0904 21:07:44.669050  776883 logs.go:276] 0 containers: []
	W0904 21:07:44.669060  776883 logs.go:278] No container was found matching "coredns"
	I0904 21:07:44.669067  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 21:07:44.669124  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 21:07:44.709338  776883 cri.go:89] found id: "8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716"
	I0904 21:07:44.709361  776883 cri.go:89] found id: "1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de"
	I0904 21:07:44.709366  776883 cri.go:89] found id: ""
	I0904 21:07:44.709374  776883 logs.go:276] 2 containers: [8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716 1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de]
	I0904 21:07:44.709431  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.713198  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.716409  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 21:07:44.716478  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 21:07:44.757540  776883 cri.go:89] found id: "538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a"
	I0904 21:07:44.757564  776883 cri.go:89] found id: ""
	I0904 21:07:44.757574  776883 logs.go:276] 1 containers: [538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a]
	I0904 21:07:44.757630  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.761124  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 21:07:44.761208  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 21:07:44.802062  776883 cri.go:89] found id: "a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad"
	I0904 21:07:44.802085  776883 cri.go:89] found id: "dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f"
	I0904 21:07:44.802090  776883 cri.go:89] found id: ""
	I0904 21:07:44.802097  776883 logs.go:276] 2 containers: [a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f]
	I0904 21:07:44.802166  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.805908  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.811099  776883 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 21:07:44.811170  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 21:07:44.860817  776883 cri.go:89] found id: "955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d"
	I0904 21:07:44.860839  776883 cri.go:89] found id: ""
	I0904 21:07:44.860847  776883 logs.go:276] 1 containers: [955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d]
	I0904 21:07:44.860904  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:44.865124  776883 logs.go:123] Gathering logs for kube-proxy [538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a] ...
	I0904 21:07:44.865149  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a"
	I0904 21:07:44.938379  776883 logs.go:123] Gathering logs for container status ...
	I0904 21:07:44.938407  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 21:07:45.090072  776883 logs.go:123] Gathering logs for kube-apiserver [f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2] ...
	I0904 21:07:45.090109  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2"
	I0904 21:07:45.172138  776883 logs.go:123] Gathering logs for kube-apiserver [92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466] ...
	I0904 21:07:45.172223  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466"
	I0904 21:07:45.253439  776883 logs.go:123] Gathering logs for kindnet [955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d] ...
	I0904 21:07:45.253467  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d"
	I0904 21:07:45.339630  776883 logs.go:123] Gathering logs for kubelet ...
	I0904 21:07:45.339658  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 21:07:45.454572  776883 logs.go:123] Gathering logs for kube-scheduler [8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716] ...
	I0904 21:07:45.454666  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716"
	I0904 21:07:45.509860  776883 logs.go:123] Gathering logs for etcd [af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2] ...
	I0904 21:07:45.509889  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2"
	I0904 21:07:45.572188  776883 logs.go:123] Gathering logs for kube-controller-manager [a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad] ...
	I0904 21:07:45.572231  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad"
	I0904 21:07:45.655975  776883 logs.go:123] Gathering logs for describe nodes ...
	I0904 21:07:45.656012  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 21:07:45.917975  776883 logs.go:123] Gathering logs for etcd [38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634] ...
	I0904 21:07:45.918010  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634"
	I0904 21:07:45.976278  776883 logs.go:123] Gathering logs for kube-controller-manager [dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f] ...
	I0904 21:07:45.976312  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f"
	I0904 21:07:46.043511  776883 logs.go:123] Gathering logs for CRI-O ...
	I0904 21:07:46.043542  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 21:07:46.122159  776883 logs.go:123] Gathering logs for dmesg ...
	I0904 21:07:46.122195  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 21:07:46.141112  776883 logs.go:123] Gathering logs for kube-scheduler [1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de] ...
	I0904 21:07:46.141142  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de"
	I0904 21:07:48.690399  776883 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 21:07:48.699377  776883 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0904 21:07:48.699495  776883 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0904 21:07:48.699509  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:48.699518  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:48.699523  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:48.712506  776883 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0904 21:07:48.712684  776883 api_server.go:141] control plane version: v1.31.0
	I0904 21:07:48.712708  776883 api_server.go:131] duration metric: took 22.661619811s to wait for apiserver health ...
	I0904 21:07:48.712717  776883 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 21:07:48.712751  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0904 21:07:48.712827  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0904 21:07:48.751456  776883 cri.go:89] found id: "f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2"
	I0904 21:07:48.751477  776883 cri.go:89] found id: "92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466"
	I0904 21:07:48.751482  776883 cri.go:89] found id: ""
	I0904 21:07:48.751490  776883 logs.go:276] 2 containers: [f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2 92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466]
	I0904 21:07:48.751546  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:48.756081  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:48.759489  776883 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0904 21:07:48.759560  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0904 21:07:48.798707  776883 cri.go:89] found id: "38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634"
	I0904 21:07:48.798732  776883 cri.go:89] found id: "af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2"
	I0904 21:07:48.798737  776883 cri.go:89] found id: ""
	I0904 21:07:48.798744  776883 logs.go:276] 2 containers: [38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634 af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2]
	I0904 21:07:48.798821  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:48.802346  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:48.805651  776883 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0904 21:07:48.805742  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0904 21:07:48.845695  776883 cri.go:89] found id: ""
	I0904 21:07:48.845720  776883 logs.go:276] 0 containers: []
	W0904 21:07:48.845729  776883 logs.go:278] No container was found matching "coredns"
	I0904 21:07:48.845735  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0904 21:07:48.845798  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0904 21:07:48.895557  776883 cri.go:89] found id: "8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716"
	I0904 21:07:48.895583  776883 cri.go:89] found id: "1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de"
	I0904 21:07:48.895589  776883 cri.go:89] found id: ""
	I0904 21:07:48.895596  776883 logs.go:276] 2 containers: [8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716 1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de]
	I0904 21:07:48.895654  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:48.899585  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:48.902978  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0904 21:07:48.903058  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0904 21:07:48.947955  776883 cri.go:89] found id: "538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a"
	I0904 21:07:48.948048  776883 cri.go:89] found id: ""
	I0904 21:07:48.948078  776883 logs.go:276] 1 containers: [538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a]
	I0904 21:07:48.948166  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:48.951794  776883 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0904 21:07:48.951939  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0904 21:07:48.991468  776883 cri.go:89] found id: "a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad"
	I0904 21:07:48.991487  776883 cri.go:89] found id: "dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f"
	I0904 21:07:48.991492  776883 cri.go:89] found id: ""
	I0904 21:07:48.991499  776883 logs.go:276] 2 containers: [a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f]
	I0904 21:07:48.991553  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:48.995116  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:48.998568  776883 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0904 21:07:48.998639  776883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0904 21:07:49.046874  776883 cri.go:89] found id: "955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d"
	I0904 21:07:49.046909  776883 cri.go:89] found id: ""
	I0904 21:07:49.046918  776883 logs.go:276] 1 containers: [955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d]
	I0904 21:07:49.046977  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:49.050789  776883 logs.go:123] Gathering logs for kube-apiserver [f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2] ...
	I0904 21:07:49.050862  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1b65a4bd313096b7f38ef67386c0985463f30603d15e4b2df709afb852779f2"
	I0904 21:07:49.104401  776883 logs.go:123] Gathering logs for etcd [af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2] ...
	I0904 21:07:49.104432  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af040434c9ee343ad84b7be540821dfc19895f1fac5004fc8eaf496b492c4ec2"
	I0904 21:07:49.158940  776883 logs.go:123] Gathering logs for kube-scheduler [8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716] ...
	I0904 21:07:49.158976  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8abae600995264383324ac746e9a84d46d4e878447ec8cecb9dd3636fcf0e716"
	I0904 21:07:49.196347  776883 logs.go:123] Gathering logs for kube-controller-manager [dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f] ...
	I0904 21:07:49.196376  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc2220b66c14d41d46fa2f2e4672cd63f0ab5f837d463ea63dae74a0c2b6f91f"
	I0904 21:07:49.234048  776883 logs.go:123] Gathering logs for CRI-O ...
	I0904 21:07:49.234074  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0904 21:07:49.306305  776883 logs.go:123] Gathering logs for container status ...
	I0904 21:07:49.306337  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0904 21:07:49.357430  776883 logs.go:123] Gathering logs for dmesg ...
	I0904 21:07:49.357466  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0904 21:07:49.374339  776883 logs.go:123] Gathering logs for kube-apiserver [92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466] ...
	I0904 21:07:49.374368  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92f57fd44ad1efdf2fda7bd233dfd03521520327ce2d18e80f597972162d6466"
	I0904 21:07:49.414912  776883 logs.go:123] Gathering logs for kube-scheduler [1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de] ...
	I0904 21:07:49.414943  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1aeda06413dc144d75989cc07a38c671e89474416c91c54ff4c93c192b9647de"
	I0904 21:07:49.453037  776883 logs.go:123] Gathering logs for kube-controller-manager [a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad] ...
	I0904 21:07:49.453069  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a19d7db2de5227459edf78ba259765b859a484d1f8442dd3ce14e6edf876d6ad"
	I0904 21:07:49.516393  776883 logs.go:123] Gathering logs for etcd [38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634] ...
	I0904 21:07:49.516425  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38b58b25574a591f05db66e5937b5a1fdb36ebf7572b5472d80a2e455e6c7634"
	I0904 21:07:49.570429  776883 logs.go:123] Gathering logs for kindnet [955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d] ...
	I0904 21:07:49.570464  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 955c050c3299657026c6b6a6fa240d6e7a5a8d75bb98a8f8d939a62eacd59a5d"
	I0904 21:07:49.613736  776883 logs.go:123] Gathering logs for kubelet ...
	I0904 21:07:49.613818  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0904 21:07:49.692402  776883 logs.go:123] Gathering logs for describe nodes ...
	I0904 21:07:49.692437  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0904 21:07:49.939851  776883 logs.go:123] Gathering logs for kube-proxy [538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a] ...
	I0904 21:07:49.939882  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 538900e233999cf485983ab9334a6a6f7d6d97f92ca72e7f90b0ea9cf653105a"
	I0904 21:07:52.489558  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0904 21:07:52.489584  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:52.489594  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:52.489599  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:52.499199  776883 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0904 21:07:52.523829  776883 system_pods.go:59] 26 kube-system pods found
	I0904 21:07:52.523875  776883 system_pods.go:61] "coredns-6f6b679f8f-ltwpt" [d3ad0e27-10ec-482a-bee3-258dbfbcb87c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 21:07:52.523883  776883 system_pods.go:61] "coredns-6f6b679f8f-qdnlw" [c6db02b7-e2ef-4d29-af36-e440ec2020f4] Running
	I0904 21:07:52.523890  776883 system_pods.go:61] "etcd-ha-067477" [781857aa-654a-40e9-8e25-ae13623caea3] Running
	I0904 21:07:52.523896  776883 system_pods.go:61] "etcd-ha-067477-m02" [edc3565c-ac8b-4995-bfda-94fe5b7ad5d3] Running
	I0904 21:07:52.523901  776883 system_pods.go:61] "etcd-ha-067477-m03" [18c89437-5f63-4ced-a639-c54b99c9cd29] Running
	I0904 21:07:52.523906  776883 system_pods.go:61] "kindnet-hbjns" [63e405ed-3f04-4cc0-af80-d4f2ddfa378a] Running
	I0904 21:07:52.523910  776883 system_pods.go:61] "kindnet-kxjl6" [4600315b-13e2-45dd-957b-143cd245e4ea] Running
	I0904 21:07:52.523915  776883 system_pods.go:61] "kindnet-ldjfl" [0a21041a-ad0e-456e-aefc-61c2b2a9db61] Running
	I0904 21:07:52.523919  776883 system_pods.go:61] "kindnet-nccgv" [cb382b64-7ac9-426a-8832-3d8d5fcea139] Running
	I0904 21:07:52.523927  776883 system_pods.go:61] "kube-apiserver-ha-067477" [070daee6-2d69-4658-9fb9-a562c403cb76] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 21:07:52.523932  776883 system_pods.go:61] "kube-apiserver-ha-067477-m02" [fac0b1c1-fe3a-4a06-94f2-263418995e82] Running
	I0904 21:07:52.523944  776883 system_pods.go:61] "kube-apiserver-ha-067477-m03" [22895f9b-c08e-4e12-b5f2-86398df9ab9f] Running
	I0904 21:07:52.523951  776883 system_pods.go:61] "kube-controller-manager-ha-067477" [5150134d-105b-45ec-b139-ba981386b0d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 21:07:52.523961  776883 system_pods.go:61] "kube-controller-manager-ha-067477-m02" [83b0f989-de3e-4668-bb7d-bc4ee0ac4b1b] Running
	I0904 21:07:52.523967  776883 system_pods.go:61] "kube-controller-manager-ha-067477-m03" [8c883b26-e93c-421f-8318-9430359845d4] Running
	I0904 21:07:52.523971  776883 system_pods.go:61] "kube-proxy-7h6l2" [5dc8c542-8bc6-4aea-94d0-0ff59bd1b6ef] Running
	I0904 21:07:52.523975  776883 system_pods.go:61] "kube-proxy-9c6g4" [84ec5af8-b201-4d83-ae00-dcaf147d530a] Running
	I0904 21:07:52.523979  776883 system_pods.go:61] "kube-proxy-n8z9c" [ce1deaad-ce7e-483e-982e-6cbf292f7458] Running
	I0904 21:07:52.523988  776883 system_pods.go:61] "kube-proxy-v2r5c" [cba0714e-dd8e-42ba-b6e6-f1f0890d0b7b] Running
	I0904 21:07:52.523992  776883 system_pods.go:61] "kube-scheduler-ha-067477" [9dc8c10a-5883-422e-b1d9-dbc7bbb25120] Running
	I0904 21:07:52.523996  776883 system_pods.go:61] "kube-scheduler-ha-067477-m02" [d6ee12c9-aeae-4f7b-99ad-ae94d2d280a2] Running
	I0904 21:07:52.523999  776883 system_pods.go:61] "kube-scheduler-ha-067477-m03" [5f0bce5f-4081-4180-8343-b1972edfb4b3] Running
	I0904 21:07:52.524003  776883 system_pods.go:61] "kube-vip-ha-067477" [3d33a394-6ae0-434f-bfdb-316bddb3a443] Running
	I0904 21:07:52.524007  776883 system_pods.go:61] "kube-vip-ha-067477-m02" [037f6744-b7e2-4c59-9da9-3100b37a95ab] Running
	I0904 21:07:52.524011  776883 system_pods.go:61] "kube-vip-ha-067477-m03" [94033ecd-52a1-4d5a-8b7f-a8050391079c] Running
	I0904 21:07:52.524020  776883 system_pods.go:61] "storage-provisioner" [e11ca9ba-bc18-41a6-a4dc-5a836e426eb2] Running
	I0904 21:07:52.524026  776883 system_pods.go:74] duration metric: took 3.811292375s to wait for pod list to return data ...
	I0904 21:07:52.524038  776883 default_sa.go:34] waiting for default service account to be created ...
	I0904 21:07:52.524139  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0904 21:07:52.524150  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:52.524159  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:52.524163  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:52.539185  776883 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0904 21:07:52.539578  776883 default_sa.go:45] found service account: "default"
	I0904 21:07:52.539602  776883 default_sa.go:55] duration metric: took 15.557636ms for default service account to be created ...
	I0904 21:07:52.539613  776883 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 21:07:52.539680  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0904 21:07:52.539691  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:52.539699  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:52.539703  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:52.546283  776883 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0904 21:07:52.556688  776883 system_pods.go:86] 26 kube-system pods found
	I0904 21:07:52.556733  776883 system_pods.go:89] "coredns-6f6b679f8f-ltwpt" [d3ad0e27-10ec-482a-bee3-258dbfbcb87c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 21:07:52.556745  776883 system_pods.go:89] "coredns-6f6b679f8f-qdnlw" [c6db02b7-e2ef-4d29-af36-e440ec2020f4] Running
	I0904 21:07:52.556752  776883 system_pods.go:89] "etcd-ha-067477" [781857aa-654a-40e9-8e25-ae13623caea3] Running
	I0904 21:07:52.556758  776883 system_pods.go:89] "etcd-ha-067477-m02" [edc3565c-ac8b-4995-bfda-94fe5b7ad5d3] Running
	I0904 21:07:52.556762  776883 system_pods.go:89] "etcd-ha-067477-m03" [18c89437-5f63-4ced-a639-c54b99c9cd29] Running
	I0904 21:07:52.556767  776883 system_pods.go:89] "kindnet-hbjns" [63e405ed-3f04-4cc0-af80-d4f2ddfa378a] Running
	I0904 21:07:52.556772  776883 system_pods.go:89] "kindnet-kxjl6" [4600315b-13e2-45dd-957b-143cd245e4ea] Running
	I0904 21:07:52.556781  776883 system_pods.go:89] "kindnet-ldjfl" [0a21041a-ad0e-456e-aefc-61c2b2a9db61] Running
	I0904 21:07:52.556785  776883 system_pods.go:89] "kindnet-nccgv" [cb382b64-7ac9-426a-8832-3d8d5fcea139] Running
	I0904 21:07:52.556799  776883 system_pods.go:89] "kube-apiserver-ha-067477" [070daee6-2d69-4658-9fb9-a562c403cb76] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 21:07:52.556806  776883 system_pods.go:89] "kube-apiserver-ha-067477-m02" [fac0b1c1-fe3a-4a06-94f2-263418995e82] Running
	I0904 21:07:52.556819  776883 system_pods.go:89] "kube-apiserver-ha-067477-m03" [22895f9b-c08e-4e12-b5f2-86398df9ab9f] Running
	I0904 21:07:52.556827  776883 system_pods.go:89] "kube-controller-manager-ha-067477" [5150134d-105b-45ec-b139-ba981386b0d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 21:07:52.556832  776883 system_pods.go:89] "kube-controller-manager-ha-067477-m02" [83b0f989-de3e-4668-bb7d-bc4ee0ac4b1b] Running
	I0904 21:07:52.556842  776883 system_pods.go:89] "kube-controller-manager-ha-067477-m03" [8c883b26-e93c-421f-8318-9430359845d4] Running
	I0904 21:07:52.556846  776883 system_pods.go:89] "kube-proxy-7h6l2" [5dc8c542-8bc6-4aea-94d0-0ff59bd1b6ef] Running
	I0904 21:07:52.556851  776883 system_pods.go:89] "kube-proxy-9c6g4" [84ec5af8-b201-4d83-ae00-dcaf147d530a] Running
	I0904 21:07:52.556855  776883 system_pods.go:89] "kube-proxy-n8z9c" [ce1deaad-ce7e-483e-982e-6cbf292f7458] Running
	I0904 21:07:52.556859  776883 system_pods.go:89] "kube-proxy-v2r5c" [cba0714e-dd8e-42ba-b6e6-f1f0890d0b7b] Running
	I0904 21:07:52.556864  776883 system_pods.go:89] "kube-scheduler-ha-067477" [9dc8c10a-5883-422e-b1d9-dbc7bbb25120] Running
	I0904 21:07:52.556868  776883 system_pods.go:89] "kube-scheduler-ha-067477-m02" [d6ee12c9-aeae-4f7b-99ad-ae94d2d280a2] Running
	I0904 21:07:52.556875  776883 system_pods.go:89] "kube-scheduler-ha-067477-m03" [5f0bce5f-4081-4180-8343-b1972edfb4b3] Running
	I0904 21:07:52.556879  776883 system_pods.go:89] "kube-vip-ha-067477" [3d33a394-6ae0-434f-bfdb-316bddb3a443] Running
	I0904 21:07:52.556885  776883 system_pods.go:89] "kube-vip-ha-067477-m02" [037f6744-b7e2-4c59-9da9-3100b37a95ab] Running
	I0904 21:07:52.556889  776883 system_pods.go:89] "kube-vip-ha-067477-m03" [94033ecd-52a1-4d5a-8b7f-a8050391079c] Running
	I0904 21:07:52.556898  776883 system_pods.go:89] "storage-provisioner" [e11ca9ba-bc18-41a6-a4dc-5a836e426eb2] Running
	I0904 21:07:52.556904  776883 system_pods.go:126] duration metric: took 17.281872ms to wait for k8s-apps to be running ...
	I0904 21:07:52.556912  776883 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 21:07:52.556979  776883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:07:52.572511  776883 system_svc.go:56] duration metric: took 15.589061ms WaitForService to wait for kubelet
	I0904 21:07:52.572542  776883 kubeadm.go:582] duration metric: took 1m12.635733019s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 21:07:52.572565  776883 node_conditions.go:102] verifying NodePressure condition ...
	I0904 21:07:52.572642  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0904 21:07:52.572652  776883 round_trippers.go:469] Request Headers:
	I0904 21:07:52.572660  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:07:52.572665  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:07:52.576256  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:07:52.577975  776883 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 21:07:52.578011  776883 node_conditions.go:123] node cpu capacity is 2
	I0904 21:07:52.578031  776883 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 21:07:52.578037  776883 node_conditions.go:123] node cpu capacity is 2
	I0904 21:07:52.578041  776883 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 21:07:52.578046  776883 node_conditions.go:123] node cpu capacity is 2
	I0904 21:07:52.578050  776883 node_conditions.go:105] duration metric: took 5.48063ms to run NodePressure ...
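(The NodePressure check above amounts to one GET of /api/v1/nodes followed by reading the cpu and ephemeral-storage capacity each of the three nodes reports. Below is a minimal client-go sketch of the same probe, pointed at the kubeconfig this run loads; it is an illustration only, not minikube's node_conditions code.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig written by this run (see the "Config loaded from file" line later in the log).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19575-710603/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One GET of /api/v1/nodes, then read the capacity each node reports.
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}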
	I0904 21:07:52.578067  776883 start.go:241] waiting for startup goroutines ...
	I0904 21:07:52.578099  776883 start.go:255] writing updated cluster config ...
	I0904 21:07:52.581091  776883 out.go:201] 
	I0904 21:07:52.583950  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:07:52.584073  776883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/config.json ...
	I0904 21:07:52.587075  776883 out.go:177] * Starting "ha-067477-m04" worker node in "ha-067477" cluster
	I0904 21:07:52.590327  776883 cache.go:121] Beginning downloading kic base image for docker with crio
	I0904 21:07:52.593031  776883 out.go:177] * Pulling base image v0.0.45 ...
	I0904 21:07:52.595591  776883 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 21:07:52.595633  776883 cache.go:56] Caching tarball of preloaded images
	I0904 21:07:52.595682  776883 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0904 21:07:52.595743  776883 preload.go:172] Found /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0904 21:07:52.595753  776883 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0904 21:07:52.595888  776883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/config.json ...
	W0904 21:07:52.616047  776883 image.go:95] image gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 is of wrong architecture
	I0904 21:07:52.616070  776883 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0904 21:07:52.616219  776883 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0904 21:07:52.616242  776883 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0904 21:07:52.616250  776883 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0904 21:07:52.616260  776883 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0904 21:07:52.616270  776883 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0904 21:07:52.617642  776883 image.go:273] response: 
	I0904 21:07:52.794955  776883 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0904 21:07:52.794998  776883 cache.go:194] Successfully downloaded all kic artifacts
	I0904 21:07:52.795029  776883 start.go:360] acquireMachinesLock for ha-067477-m04: {Name:mk9a9ef54b1f1578afe686e76304e40135a21e98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 21:07:52.795095  776883 start.go:364] duration metric: took 43.06µs to acquireMachinesLock for "ha-067477-m04"
	I0904 21:07:52.795118  776883 start.go:96] Skipping create...Using existing machine configuration
	I0904 21:07:52.795124  776883 fix.go:54] fixHost starting: m04
	I0904 21:07:52.795394  776883 cli_runner.go:164] Run: docker container inspect ha-067477-m04 --format={{.State.Status}}
	I0904 21:07:52.814310  776883 fix.go:112] recreateIfNeeded on ha-067477-m04: state=Stopped err=<nil>
	W0904 21:07:52.814339  776883 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 21:07:52.817247  776883 out.go:177] * Restarting existing docker container for "ha-067477-m04" ...
	I0904 21:07:52.819877  776883 cli_runner.go:164] Run: docker start ha-067477-m04
	I0904 21:07:53.208821  776883 cli_runner.go:164] Run: docker container inspect ha-067477-m04 --format={{.State.Status}}
	I0904 21:07:53.232775  776883 kic.go:430] container "ha-067477-m04" state is running.
	I0904 21:07:53.233126  776883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477-m04
	I0904 21:07:53.256797  776883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/config.json ...
	I0904 21:07:53.257042  776883 machine.go:93] provisionDockerMachine start ...
	I0904 21:07:53.257154  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:07:53.278145  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:07:53.279758  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0904 21:07:53.279804  776883 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 21:07:53.280400  776883 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54498->127.0.0.1:33599: read: connection reset by peer
	I0904 21:07:56.409148  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-067477-m04
	
	I0904 21:07:56.409172  776883 ubuntu.go:169] provisioning hostname "ha-067477-m04"
	I0904 21:07:56.409236  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:07:56.427176  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:07:56.427416  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0904 21:07:56.427432  776883 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-067477-m04 && echo "ha-067477-m04" | sudo tee /etc/hostname
	I0904 21:07:56.570601  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-067477-m04
	
	I0904 21:07:56.570765  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:07:56.599586  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:07:56.599899  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0904 21:07:56.599939  776883 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-067477-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-067477-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-067477-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 21:07:56.726850  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
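(The hostname and /etc/hosts commands above are executed over SSH against the host port Docker publishes for the node container's 22/tcp, 33599 here, authenticating with the per-machine id_rsa key. The standalone program below, using golang.org/x/crypto/ssh, illustrates that access path only; it is not minikube's libmachine provisioner.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Private key generated by minikube for this node (path taken from the log).
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m04/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: skip host key verification
	}
	// 33599 is the host port Docker mapped to the container's 22/tcp.
	client, err := ssh.Dial("tcp", "127.0.0.1:33599", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("node hostname: %s", out)
}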
	I0904 21:07:56.726943  776883 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19575-710603/.minikube CaCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19575-710603/.minikube}
	I0904 21:07:56.726975  776883 ubuntu.go:177] setting up certificates
	I0904 21:07:56.727006  776883 provision.go:84] configureAuth start
	I0904 21:07:56.727083  776883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477-m04
	I0904 21:07:56.744910  776883 provision.go:143] copyHostCerts
	I0904 21:07:56.744950  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem
	I0904 21:07:56.744986  776883 exec_runner.go:144] found /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem, removing ...
	I0904 21:07:56.744992  776883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem
	I0904 21:07:56.745067  776883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/key.pem (1675 bytes)
	I0904 21:07:56.745145  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem
	I0904 21:07:56.745162  776883 exec_runner.go:144] found /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem, removing ...
	I0904 21:07:56.745167  776883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem
	I0904 21:07:56.745193  776883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/ca.pem (1082 bytes)
	I0904 21:07:56.745272  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem
	I0904 21:07:56.745288  776883 exec_runner.go:144] found /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem, removing ...
	I0904 21:07:56.745292  776883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem
	I0904 21:07:56.745320  776883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19575-710603/.minikube/cert.pem (1123 bytes)
	I0904 21:07:56.745365  776883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem org=jenkins.ha-067477-m04 san=[127.0.0.1 192.168.49.5 ha-067477-m04 localhost minikube]
	I0904 21:07:57.739864  776883 provision.go:177] copyRemoteCerts
	I0904 21:07:57.739937  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 21:07:57.739981  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:07:57.768640  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m04/id_rsa Username:docker}
	I0904 21:07:57.863067  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0904 21:07:57.863134  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 21:07:57.893911  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0904 21:07:57.893977  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 21:07:57.919204  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0904 21:07:57.919314  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 21:07:57.945080  776883 provision.go:87] duration metric: took 1.218048712s to configureAuth
	I0904 21:07:57.945106  776883 ubuntu.go:193] setting minikube options for container-runtime
	I0904 21:07:57.945334  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:07:57.945435  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:07:57.962542  776883 main.go:141] libmachine: Using SSH client type: native
	I0904 21:07:57.962782  776883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33599 <nil> <nil>}
	I0904 21:07:57.962796  776883 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 21:07:58.255535  776883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 21:07:58.255616  776883 machine.go:96] duration metric: took 4.998557341s to provisionDockerMachine
	I0904 21:07:58.255642  776883 start.go:293] postStartSetup for "ha-067477-m04" (driver="docker")
	I0904 21:07:58.255671  776883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 21:07:58.255779  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 21:07:58.255849  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:07:58.274037  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m04/id_rsa Username:docker}
	I0904 21:07:58.363630  776883 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 21:07:58.367610  776883 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 21:07:58.367657  776883 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 21:07:58.367668  776883 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 21:07:58.367678  776883 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0904 21:07:58.367689  776883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/addons for local assets ...
	I0904 21:07:58.367779  776883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-710603/.minikube/files for local assets ...
	I0904 21:07:58.367880  776883 filesync.go:149] local asset: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem -> 7159812.pem in /etc/ssl/certs
	I0904 21:07:58.367891  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem -> /etc/ssl/certs/7159812.pem
	I0904 21:07:58.368031  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 21:07:58.378031  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem --> /etc/ssl/certs/7159812.pem (1708 bytes)
	I0904 21:07:58.404449  776883 start.go:296] duration metric: took 148.774795ms for postStartSetup
	I0904 21:07:58.404624  776883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:07:58.404739  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:07:58.425189  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m04/id_rsa Username:docker}
	I0904 21:07:58.511777  776883 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 21:07:58.517043  776883 fix.go:56] duration metric: took 5.721911089s for fixHost
	I0904 21:07:58.517070  776883 start.go:83] releasing machines lock for "ha-067477-m04", held for 5.721962886s
	I0904 21:07:58.517163  776883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477-m04
	I0904 21:07:58.537026  776883 out.go:177] * Found network options:
	I0904 21:07:58.538980  776883 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0904 21:07:58.541484  776883 proxy.go:119] fail to check proxy env: Error ip not in block
	W0904 21:07:58.541516  776883 proxy.go:119] fail to check proxy env: Error ip not in block
	W0904 21:07:58.541548  776883 proxy.go:119] fail to check proxy env: Error ip not in block
	W0904 21:07:58.541562  776883 proxy.go:119] fail to check proxy env: Error ip not in block
	I0904 21:07:58.541635  776883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 21:07:58.541683  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:07:58.542039  776883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 21:07:58.542096  776883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:07:58.563953  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m04/id_rsa Username:docker}
	I0904 21:07:58.573975  776883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33599 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m04/id_rsa Username:docker}
	I0904 21:07:58.839003  776883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 21:07:58.843628  776883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:07:58.852949  776883 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 21:07:58.853024  776883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 21:07:58.862478  776883 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 21:07:58.862553  776883 start.go:495] detecting cgroup driver to use...
	I0904 21:07:58.862592  776883 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 21:07:58.862647  776883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 21:07:58.876739  776883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 21:07:58.888500  776883 docker.go:217] disabling cri-docker service (if available) ...
	I0904 21:07:58.888566  776883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 21:07:58.903437  776883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 21:07:58.916936  776883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 21:07:59.027683  776883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 21:07:59.157785  776883 docker.go:233] disabling docker service ...
	I0904 21:07:59.157940  776883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 21:07:59.173539  776883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 21:07:59.187093  776883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 21:07:59.281381  776883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 21:07:59.373481  776883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 21:07:59.385893  776883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 21:07:59.409602  776883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0904 21:07:59.409766  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:07:59.420551  776883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 21:07:59.420679  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:07:59.433268  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:07:59.445200  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:07:59.458489  776883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 21:07:59.476085  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:07:59.487738  776883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:07:59.497825  776883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 21:07:59.508192  776883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 21:07:59.517124  776883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 21:07:59.525936  776883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:07:59.622740  776883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 21:07:59.744303  776883 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 21:07:59.744378  776883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 21:07:59.749071  776883 start.go:563] Will wait 60s for crictl version
	I0904 21:07:59.749134  776883 ssh_runner.go:195] Run: which crictl
	I0904 21:07:59.752648  776883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 21:07:59.802278  776883 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 21:07:59.802442  776883 ssh_runner.go:195] Run: crio --version
	I0904 21:07:59.846378  776883 ssh_runner.go:195] Run: crio --version
	I0904 21:07:59.889738  776883 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0904 21:07:59.892098  776883 out.go:177]   - env NO_PROXY=192.168.49.2
	I0904 21:07:59.894054  776883 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0904 21:07:59.896422  776883 cli_runner.go:164] Run: docker network inspect ha-067477 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 21:07:59.911761  776883 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 21:07:59.915916  776883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:07:59.927104  776883 mustload.go:65] Loading cluster: ha-067477
	I0904 21:07:59.927355  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:07:59.927618  776883 cli_runner.go:164] Run: docker container inspect ha-067477 --format={{.State.Status}}
	I0904 21:07:59.944679  776883 host.go:66] Checking if "ha-067477" exists ...
	I0904 21:07:59.944958  776883 certs.go:68] Setting up /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477 for IP: 192.168.49.5
	I0904 21:07:59.944971  776883 certs.go:194] generating shared ca certs ...
	I0904 21:07:59.944986  776883 certs.go:226] acquiring lock for ca certs: {Name:mkc3a04cbc0797b819dd3c9fec2eaef93961640b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 21:07:59.945108  776883 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key
	I0904 21:07:59.945160  776883 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key
	I0904 21:07:59.945175  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0904 21:07:59.945195  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0904 21:07:59.945212  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0904 21:07:59.945227  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0904 21:07:59.945283  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981.pem (1338 bytes)
	W0904 21:07:59.945323  776883 certs.go:480] ignoring /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981_empty.pem, impossibly tiny 0 bytes
	I0904 21:07:59.945336  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 21:07:59.945363  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/ca.pem (1082 bytes)
	I0904 21:07:59.945392  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/cert.pem (1123 bytes)
	I0904 21:07:59.945418  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/key.pem (1675 bytes)
	I0904 21:07:59.945463  776883 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem (1708 bytes)
	I0904 21:07:59.945498  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem -> /usr/share/ca-certificates/7159812.pem
	I0904 21:07:59.945516  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:07:59.945530  776883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981.pem -> /usr/share/ca-certificates/715981.pem
	I0904 21:07:59.945554  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 21:07:59.980404  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 21:08:00.032216  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 21:08:00.169703  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 21:08:00.241799  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/ssl/certs/7159812.pem --> /usr/share/ca-certificates/7159812.pem (1708 bytes)
	I0904 21:08:00.474107  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 21:08:00.559273  776883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-710603/.minikube/certs/715981.pem --> /usr/share/ca-certificates/715981.pem (1338 bytes)
	I0904 21:08:00.627402  776883 ssh_runner.go:195] Run: openssl version
	I0904 21:08:00.638650  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7159812.pem && ln -fs /usr/share/ca-certificates/7159812.pem /etc/ssl/certs/7159812.pem"
	I0904 21:08:00.653278  776883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7159812.pem
	I0904 21:08:00.661496  776883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 20:54 /usr/share/ca-certificates/7159812.pem
	I0904 21:08:00.661683  776883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7159812.pem
	I0904 21:08:00.675667  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7159812.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 21:08:00.688529  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 21:08:00.701720  776883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:08:00.706782  776883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:08:00.706886  776883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 21:08:00.715546  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 21:08:00.729222  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/715981.pem && ln -fs /usr/share/ca-certificates/715981.pem /etc/ssl/certs/715981.pem"
	I0904 21:08:00.742594  776883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/715981.pem
	I0904 21:08:00.746966  776883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 20:54 /usr/share/ca-certificates/715981.pem
	I0904 21:08:00.747043  776883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/715981.pem
	I0904 21:08:00.755877  776883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/715981.pem /etc/ssl/certs/51391683.0"
	I0904 21:08:00.770300  776883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 21:08:00.780031  776883 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 21:08:00.780099  776883 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.0  false true} ...
	I0904 21:08:00.780272  776883 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-067477-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-067477 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 21:08:00.780508  776883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0904 21:08:00.792831  776883 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 21:08:00.792975  776883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0904 21:08:00.805174  776883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 21:08:00.827643  776883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 21:08:00.848472  776883 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0904 21:08:00.852518  776883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 21:08:00.864564  776883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:08:00.968776  776883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:08:00.983421  776883 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0904 21:08:00.983852  776883 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:08:00.986563  776883 out.go:177] * Verifying Kubernetes components...
	I0904 21:08:00.988364  776883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 21:08:01.096573  776883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 21:08:01.110096  776883 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 21:08:01.110430  776883 kapi.go:59] client config for ha-067477: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19575-710603/.minikube/profiles/ha-067477/client.key", CAFile:"/home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cba20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0904 21:08:01.110499  776883 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0904 21:08:01.110721  776883 node_ready.go:35] waiting up to 6m0s for node "ha-067477-m04" to be "Ready" ...
	I0904 21:08:01.110801  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:08:01.110811  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:01.110820  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:01.110826  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:01.113703  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:01.114463  776883 node_ready.go:49] node "ha-067477-m04" has status "Ready":"True"
	I0904 21:08:01.114486  776883 node_ready.go:38] duration metric: took 3.747058ms for node "ha-067477-m04" to be "Ready" ...
	I0904 21:08:01.114496  776883 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 21:08:01.114602  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0904 21:08:01.114616  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:01.114624  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:01.114628  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:01.120723  776883 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0904 21:08:01.131499  776883 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:01.131621  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:01.131647  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:01.131668  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:01.131681  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:01.135039  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:01.135773  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:01.135794  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:01.135803  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:01.135807  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:01.138778  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
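(The request pairs that follow repeat this probe roughly every half second until coredns-6f6b679f8f-ltwpt reports Ready. A short client-go sketch of the same wait is shown below, using the kubeconfig this run loads; the helper is illustrative and not minikube's pod_ready implementation.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the API server until the named pod reports the Ready
// condition, mirroring the GET loop in the log above.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19575-710603/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-6f6b679f8f-ltwpt", 6*time.Minute); err != nil {
		fmt.Println("coredns never became Ready:", err)
	}
}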
	I0904 21:08:01.632542  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:01.632618  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:01.632634  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:01.632640  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:01.635739  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:01.636544  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:01.636594  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:01.636610  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:01.636617  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:01.639463  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:02.132518  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:02.132537  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:02.132547  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:02.132552  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:02.135520  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:02.136356  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:02.136377  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:02.136386  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:02.136390  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:02.139435  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:02.632674  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:02.632698  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:02.632708  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:02.632713  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:02.635601  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:02.636426  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:02.636449  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:02.636459  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:02.636462  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:02.639223  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:03.132135  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:03.132164  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:03.132174  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:03.132178  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:03.136232  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:08:03.137067  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:03.137088  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:03.137098  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:03.137102  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:03.139995  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:03.140950  776883 pod_ready.go:103] pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace has status "Ready":"False"
	I0904 21:08:03.632182  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:03.632205  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:03.632216  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:03.632220  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:03.635333  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:03.636269  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:03.636295  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:03.636305  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:03.636309  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:03.638995  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:04.132420  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:04.132447  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:04.132469  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:04.132475  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:04.136133  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:04.137368  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:04.137392  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:04.137408  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:04.137413  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:04.140052  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:04.632347  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:04.632373  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:04.632384  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:04.632388  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:04.635518  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:04.636245  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:04.636266  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:04.636276  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:04.636281  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:04.638944  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:05.132625  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:05.132653  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:05.132664  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:05.132670  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:05.136203  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:05.136990  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:05.137010  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:05.137020  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:05.137032  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:05.143367  776883 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0904 21:08:05.144031  776883 pod_ready.go:103] pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace has status "Ready":"False"
	I0904 21:08:05.632077  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:05.632104  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:05.632118  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:05.632122  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:05.635145  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:05.635821  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:05.635843  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:05.635852  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:05.635859  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:05.638843  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:06.132395  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:06.132421  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:06.132430  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:06.132434  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:06.135443  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:06.136128  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:06.136140  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:06.136148  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:06.136153  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:06.146749  776883 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0904 21:08:06.631758  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:06.631784  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:06.631794  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:06.631797  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:06.634883  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:06.635725  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:06.635748  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:06.635758  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:06.635763  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:06.638292  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:07.132509  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:07.132536  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:07.132546  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:07.132551  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:07.135356  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:07.136167  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:07.136188  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:07.136197  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:07.136202  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:07.138866  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:07.632046  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:07.632072  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:07.632082  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:07.632086  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:07.634979  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:07.635926  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:07.635948  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:07.635958  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:07.635964  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:07.638641  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:07.639327  776883 pod_ready.go:103] pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace has status "Ready":"False"
	I0904 21:08:08.132647  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:08.132672  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:08.132682  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:08.132687  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:08.135861  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:08.136706  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:08.136728  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:08.136737  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:08.136743  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:08.139407  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:08.632328  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:08.632354  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:08.632364  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:08.632369  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:08.635472  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:08.636148  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:08.636166  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:08.636175  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:08.636180  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:08.638978  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:09.131858  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:09.131887  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:09.131897  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:09.131901  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:09.136141  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:08:09.137613  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:09.137641  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:09.137660  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:09.137679  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:09.141660  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:09.631743  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:09.631770  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:09.631780  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:09.631784  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:09.635698  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:09.637117  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:09.637139  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:09.637183  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:09.637195  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:09.640692  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:09.641896  776883 pod_ready.go:103] pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace has status "Ready":"False"
	I0904 21:08:10.132657  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:10.132688  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:10.132698  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:10.132702  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:10.136310  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:10.137767  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:10.137795  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:10.137805  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:10.137817  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:10.141002  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:10.631680  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:10.631710  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:10.631720  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:10.631726  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:10.635846  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:08:10.637139  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:10.637164  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:10.637173  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:10.637182  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:10.640570  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:11.132005  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:11.132031  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:11.132041  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:11.132046  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:11.134743  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:11.135626  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:11.135661  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:11.135678  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:11.135684  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:11.140645  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:08:11.631715  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:11.631774  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:11.631784  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:11.631794  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:11.635710  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:11.636709  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:11.636729  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:11.636739  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:11.636744  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:11.639307  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:12.131806  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:12.131832  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:12.131842  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:12.131847  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:12.146842  776883 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0904 21:08:12.148995  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:12.149023  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:12.149033  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:12.149037  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:12.156267  776883 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0904 21:08:12.157313  776883 pod_ready.go:103] pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace has status "Ready":"False"
	I0904 21:08:12.631731  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:12.631756  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:12.631766  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:12.631771  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:12.634927  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:12.636041  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:12.636076  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:12.636086  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:12.636090  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:12.638830  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:13.132491  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:13.132521  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:13.132534  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:13.132546  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:13.135666  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:13.136482  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:13.136502  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:13.136512  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:13.136519  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:13.139209  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:13.631813  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:13.631855  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:13.631866  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:13.631877  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:13.635080  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:13.635923  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:13.635946  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:13.635956  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:13.635963  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:13.638686  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:14.132682  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:14.132707  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:14.132717  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:14.132722  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:14.136036  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:14.136772  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:14.136792  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:14.136802  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:14.136806  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:14.139663  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:14.631980  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:14.632006  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:14.632015  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:14.632019  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:14.635092  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:14.636252  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:14.636272  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:14.636281  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:14.636285  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:14.638996  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:14.639766  776883 pod_ready.go:103] pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace has status "Ready":"False"
	I0904 21:08:15.131810  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:15.131852  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:15.131863  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:15.131868  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:15.135327  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:15.136252  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:15.136277  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:15.136286  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:15.136291  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:15.140048  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:15.632399  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:15.632424  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:15.632434  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:15.632439  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:15.635486  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:15.636591  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:15.636613  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:15.636623  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:15.636628  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:15.639295  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:16.132549  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:16.132572  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:16.132580  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:16.132586  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:16.135527  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:16.136837  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:16.136856  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:16.136865  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:16.136869  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:16.142472  776883 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0904 21:08:16.632364  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:16.632390  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:16.632398  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:16.632404  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:16.635457  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:16.636353  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:16.636375  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:16.636383  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:16.636386  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:16.639015  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:17.132219  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:17.132250  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:17.132260  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:17.132264  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:17.135029  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:17.135746  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:17.135762  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:17.135771  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:17.135776  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:17.138180  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:17.138781  776883 pod_ready.go:103] pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace has status "Ready":"False"
	I0904 21:08:17.632551  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:17.632633  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:17.632651  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:17.632656  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:17.636021  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:17.637531  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:17.637552  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:17.637561  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:17.637566  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:17.640275  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:18.131733  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:18.131779  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.131789  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.131793  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.134966  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:18.135892  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:18.135913  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.135923  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.135927  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.138706  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:18.631732  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-ltwpt
	I0904 21:08:18.631766  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.631776  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.631782  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.670359  776883 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0904 21:08:18.671203  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:18.671221  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.671237  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.671247  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.676301  776883 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0904 21:08:18.676891  776883 pod_ready.go:98] node "ha-067477" hosting pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:18.676921  776883 pod_ready.go:82] duration metric: took 17.545390413s for pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:18.676933  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477" hosting pod "coredns-6f6b679f8f-ltwpt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:18.676940  776883 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qdnlw" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:18.677014  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qdnlw
	I0904 21:08:18.677025  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.677033  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.677037  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.686819  776883 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0904 21:08:18.687558  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:18.687583  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.687592  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.687597  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.691248  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:18.691862  776883 pod_ready.go:98] node "ha-067477" hosting pod "coredns-6f6b679f8f-qdnlw" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:18.691883  776883 pod_ready.go:82] duration metric: took 14.935899ms for pod "coredns-6f6b679f8f-qdnlw" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:18.691893  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477" hosting pod "coredns-6f6b679f8f-qdnlw" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:18.691903  776883 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:18.691970  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-067477
	I0904 21:08:18.691981  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.691990  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.692005  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.694361  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:18.695026  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:18.695052  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.695060  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.695066  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.697633  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:18.698217  776883 pod_ready.go:98] node "ha-067477" hosting pod "etcd-ha-067477" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:18.698247  776883 pod_ready.go:82] duration metric: took 6.335487ms for pod "etcd-ha-067477" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:18.698258  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477" hosting pod "etcd-ha-067477" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:18.698269  776883 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:18.698349  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-067477-m02
	I0904 21:08:18.698360  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.698368  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.698378  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.705558  776883 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0904 21:08:18.706267  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:08:18.706285  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.706295  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.706299  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.710605  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:08:18.711641  776883 pod_ready.go:93] pod "etcd-ha-067477-m02" in "kube-system" namespace has status "Ready":"True"
	I0904 21:08:18.711665  776883 pod_ready.go:82] duration metric: took 13.382104ms for pod "etcd-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:18.711677  776883 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:18.711748  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-067477-m03
	I0904 21:08:18.711766  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.711774  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.711778  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.718442  776883 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0904 21:08:18.718578  776883 pod_ready.go:98] error getting pod "etcd-ha-067477-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-067477-m03" not found
	I0904 21:08:18.718605  776883 pod_ready.go:82] duration metric: took 6.904459ms for pod "etcd-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:18.718622  776883 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "etcd-ha-067477-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-067477-m03" not found
	I0904 21:08:18.718644  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:18.718722  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477
	I0904 21:08:18.718734  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.718742  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.718760  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.721240  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:18.832573  776883 request.go:632] Waited for 110.273748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:18.832659  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:18.832666  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:18.832680  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:18.832693  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:18.835957  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:18.836547  776883 pod_ready.go:98] node "ha-067477" hosting pod "kube-apiserver-ha-067477" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:18.836571  776883 pod_ready.go:82] duration metric: took 117.914978ms for pod "kube-apiserver-ha-067477" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:18.836582  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477" hosting pod "kube-apiserver-ha-067477" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:18.836591  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:19.031916  776883 request.go:632] Waited for 195.246991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477-m02
	I0904 21:08:19.032082  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477-m02
	I0904 21:08:19.032119  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:19.032148  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:19.032171  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:19.036310  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:08:19.231982  776883 request.go:632] Waited for 194.697547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:08:19.232062  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:08:19.232074  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:19.232082  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:19.232086  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:19.235117  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:19.235699  776883 pod_ready.go:93] pod "kube-apiserver-ha-067477-m02" in "kube-system" namespace has status "Ready":"True"
	I0904 21:08:19.235721  776883 pod_ready.go:82] duration metric: took 399.11636ms for pod "kube-apiserver-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:19.235734  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:19.432512  776883 request.go:632] Waited for 196.694359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477-m03
	I0904 21:08:19.432605  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-067477-m03
	I0904 21:08:19.432616  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:19.432623  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:19.432627  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:19.435497  776883 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0904 21:08:19.435624  776883 pod_ready.go:98] error getting pod "kube-apiserver-ha-067477-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-067477-m03" not found
	I0904 21:08:19.435640  776883 pod_ready.go:82] duration metric: took 199.898865ms for pod "kube-apiserver-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:19.435655  776883 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-067477-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-067477-m03" not found
	I0904 21:08:19.435665  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:19.632102  776883 request.go:632] Waited for 196.358417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477
	I0904 21:08:19.632194  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477
	I0904 21:08:19.632205  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:19.632217  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:19.632227  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:19.635225  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:19.832239  776883 request.go:632] Waited for 196.329576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:19.832317  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:19.832323  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:19.832332  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:19.832342  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:19.835246  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:19.836072  776883 pod_ready.go:98] node "ha-067477" hosting pod "kube-controller-manager-ha-067477" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:19.836112  776883 pod_ready.go:82] duration metric: took 400.435501ms for pod "kube-controller-manager-ha-067477" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:19.836127  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477" hosting pod "kube-controller-manager-ha-067477" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:19.836135  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:20.072173  776883 request.go:632] Waited for 235.951799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477-m02
	I0904 21:08:20.072277  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477-m02
	I0904 21:08:20.072299  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:20.072340  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:20.072359  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:20.078055  776883 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0904 21:08:20.232672  776883 request.go:632] Waited for 153.325875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:08:20.232738  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:08:20.232744  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:20.232753  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:20.232761  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:20.235813  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:20.236630  776883 pod_ready.go:93] pod "kube-controller-manager-ha-067477-m02" in "kube-system" namespace has status "Ready":"True"
	I0904 21:08:20.236653  776883 pod_ready.go:82] duration metric: took 400.50736ms for pod "kube-controller-manager-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:20.236666  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:20.431940  776883 request.go:632] Waited for 195.18511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477-m03
	I0904 21:08:20.432018  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-067477-m03
	I0904 21:08:20.432026  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:20.432036  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:20.432050  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:20.434701  776883 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0904 21:08:20.434902  776883 pod_ready.go:98] error getting pod "kube-controller-manager-ha-067477-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-067477-m03" not found
	I0904 21:08:20.434951  776883 pod_ready.go:82] duration metric: took 198.276888ms for pod "kube-controller-manager-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:20.434976  776883 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-067477-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-067477-m03" not found
	I0904 21:08:20.435004  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7h6l2" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:20.632440  776883 request.go:632] Waited for 197.347088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:08:20.632547  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7h6l2
	I0904 21:08:20.632578  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:20.632593  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:20.632599  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:20.635523  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:20.832634  776883 request.go:632] Waited for 196.333359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:08:20.832694  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:08:20.832701  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:20.832716  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:20.832724  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:20.835613  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:20.836283  776883 pod_ready.go:93] pod "kube-proxy-7h6l2" in "kube-system" namespace has status "Ready":"True"
	I0904 21:08:20.836304  776883 pod_ready.go:82] duration metric: took 401.273981ms for pod "kube-proxy-7h6l2" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:20.836318  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9c6g4" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:21.036875  776883 request.go:632] Waited for 200.452781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9c6g4
	I0904 21:08:21.036977  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9c6g4
	I0904 21:08:21.036989  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:21.036997  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:21.037006  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:21.041168  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:08:21.231858  776883 request.go:632] Waited for 189.252601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:08:21.231992  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:08:21.232022  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:21.232044  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:21.232082  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:21.237515  776883 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0904 21:08:21.432109  776883 request.go:632] Waited for 95.236798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9c6g4
	I0904 21:08:21.432170  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9c6g4
	I0904 21:08:21.432177  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:21.432186  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:21.432196  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:21.435047  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:21.632293  776883 request.go:632] Waited for 196.38528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:08:21.632352  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:08:21.632358  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:21.632366  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:21.632377  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:21.635209  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:21.836665  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9c6g4
	I0904 21:08:21.836743  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:21.836771  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:21.836793  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:21.840102  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:22.032290  776883 request.go:632] Waited for 191.221885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:08:22.032430  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:08:22.032465  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:22.032489  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:22.032513  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:22.036039  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:22.337176  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9c6g4
	I0904 21:08:22.337201  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:22.337212  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:22.337215  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:22.340074  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:22.432027  776883 request.go:632] Waited for 91.131684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:08:22.432110  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m04
	I0904 21:08:22.432117  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:22.432128  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:22.432143  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:22.434804  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:22.435662  776883 pod_ready.go:93] pod "kube-proxy-9c6g4" in "kube-system" namespace has status "Ready":"True"
	I0904 21:08:22.435689  776883 pod_ready.go:82] duration metric: took 1.59936355s for pod "kube-proxy-9c6g4" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:22.435701  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8z9c" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:22.632111  776883 request.go:632] Waited for 196.343353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:08:22.632220  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8z9c
	I0904 21:08:22.632232  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:22.632243  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:22.632258  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:22.635025  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:22.832236  776883 request.go:632] Waited for 196.361314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:22.832324  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:22.832371  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:22.832386  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:22.832391  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:22.835341  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:22.836115  776883 pod_ready.go:98] node "ha-067477" hosting pod "kube-proxy-n8z9c" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:22.836144  776883 pod_ready.go:82] duration metric: took 400.435625ms for pod "kube-proxy-n8z9c" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:22.836155  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477" hosting pod "kube-proxy-n8z9c" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:22.836162  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v2r5c" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:23.032747  776883 request.go:632] Waited for 196.508666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v2r5c
	I0904 21:08:23.032824  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v2r5c
	I0904 21:08:23.032831  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:23.032839  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:23.032843  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:23.036026  776883 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0904 21:08:23.036150  776883 pod_ready.go:98] error getting pod "kube-proxy-v2r5c" in "kube-system" namespace (skipping!): pods "kube-proxy-v2r5c" not found
	I0904 21:08:23.036172  776883 pod_ready.go:82] duration metric: took 199.993469ms for pod "kube-proxy-v2r5c" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:23.036183  776883 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-v2r5c" in "kube-system" namespace (skipping!): pods "kube-proxy-v2r5c" not found
	I0904 21:08:23.036194  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-067477" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:23.232646  776883 request.go:632] Waited for 196.372456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-067477
	I0904 21:08:23.232715  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-067477
	I0904 21:08:23.232726  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:23.232735  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:23.232748  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:23.235848  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:23.431762  776883 request.go:632] Waited for 195.279181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:23.431842  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477
	I0904 21:08:23.431900  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:23.431910  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:23.431913  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:23.436715  776883 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0904 21:08:23.437967  776883 pod_ready.go:98] node "ha-067477" hosting pod "kube-scheduler-ha-067477" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:23.438033  776883 pod_ready.go:82] duration metric: took 401.82931ms for pod "kube-scheduler-ha-067477" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:23.438060  776883 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-067477" hosting pod "kube-scheduler-ha-067477" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-067477" has status "Ready":"Unknown"
	I0904 21:08:23.438082  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:23.631818  776883 request.go:632] Waited for 193.63221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-067477-m02
	I0904 21:08:23.631889  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-067477-m02
	I0904 21:08:23.631902  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:23.631911  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:23.631918  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:23.634985  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:23.831873  776883 request.go:632] Waited for 196.246075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:08:23.831933  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-067477-m02
	I0904 21:08:23.831944  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:23.831953  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:23.831960  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:23.834854  776883 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0904 21:08:23.835618  776883 pod_ready.go:93] pod "kube-scheduler-ha-067477-m02" in "kube-system" namespace has status "Ready":"True"
	I0904 21:08:23.835644  776883 pod_ready.go:82] duration metric: took 397.530281ms for pod "kube-scheduler-ha-067477-m02" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:23.835656  776883 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	I0904 21:08:24.033230  776883 request.go:632] Waited for 197.498567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-067477-m03
	I0904 21:08:24.033305  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-067477-m03
	I0904 21:08:24.033312  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:24.033320  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:24.033324  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:24.053024  776883 round_trippers.go:574] Response Status: 404 Not Found in 19 milliseconds
	I0904 21:08:24.053157  776883 pod_ready.go:98] error getting pod "kube-scheduler-ha-067477-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-067477-m03" not found
	I0904 21:08:24.053176  776883 pod_ready.go:82] duration metric: took 217.513129ms for pod "kube-scheduler-ha-067477-m03" in "kube-system" namespace to be "Ready" ...
	E0904 21:08:24.053189  776883 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-067477-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-067477-m03" not found
	I0904 21:08:24.053199  776883 pod_ready.go:39] duration metric: took 22.938674392s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 21:08:24.053218  776883 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 21:08:24.053276  776883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:08:24.070980  776883 system_svc.go:56] duration metric: took 17.752844ms WaitForService to wait for kubelet
	I0904 21:08:24.071012  776883 kubeadm.go:582] duration metric: took 23.087548681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 21:08:24.071038  776883 node_conditions.go:102] verifying NodePressure condition ...
	I0904 21:08:24.232587  776883 request.go:632] Waited for 161.470159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0904 21:08:24.232647  776883 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0904 21:08:24.232660  776883 round_trippers.go:469] Request Headers:
	I0904 21:08:24.232669  776883 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0904 21:08:24.232686  776883 round_trippers.go:473]     Accept: application/json, */*
	I0904 21:08:24.235911  776883 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0904 21:08:24.237465  776883 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 21:08:24.237492  776883 node_conditions.go:123] node cpu capacity is 2
	I0904 21:08:24.237504  776883 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 21:08:24.237509  776883 node_conditions.go:123] node cpu capacity is 2
	I0904 21:08:24.237513  776883 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0904 21:08:24.237518  776883 node_conditions.go:123] node cpu capacity is 2
	I0904 21:08:24.237522  776883 node_conditions.go:105] duration metric: took 166.479317ms to run NodePressure ...
	I0904 21:08:24.237536  776883 start.go:241] waiting for startup goroutines ...
	I0904 21:08:24.237558  776883 start.go:255] writing updated cluster config ...
	I0904 21:08:24.237938  776883 ssh_runner.go:195] Run: rm -f paused
	I0904 21:08:24.298617  776883 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0904 21:08:24.301661  776883 out.go:177] * Done! kubectl is now configured to use "ha-067477" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 21:07:44 ha-067477 crio[647]: time="2024-09-04 21:07:44.903340767Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5417b35bbaa0cfd77ed15d55ec1e311c0caf4aa73a75ac6f1aea7545fb4832a4/merged/etc/group: no such file or directory"
	Sep 04 21:07:44 ha-067477 crio[647]: time="2024-09-04 21:07:44.994927674Z" level=info msg="Created container c7b87e4605e056ae1365210ab0442b60415de27ef0c1bc2aeeee907fedebd338: kube-system/kube-vip-ha-067477/kube-vip" id=ddff183e-6edb-4b36-a0b1-e4128138abb2 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 04 21:07:44 ha-067477 crio[647]: time="2024-09-04 21:07:44.995789340Z" level=info msg="Starting container: c7b87e4605e056ae1365210ab0442b60415de27ef0c1bc2aeeee907fedebd338" id=385ed616-3b7b-4e32-a2b8-dc93ba189a34 name=/runtime.v1.RuntimeService/StartContainer
	Sep 04 21:07:45 ha-067477 crio[647]: time="2024-09-04 21:07:45.027308722Z" level=info msg="Started container" PID=1803 containerID=c7b87e4605e056ae1365210ab0442b60415de27ef0c1bc2aeeee907fedebd338 description=kube-system/kube-vip-ha-067477/kube-vip id=385ed616-3b7b-4e32-a2b8-dc93ba189a34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fd5c0063f4caf45dea892104eb5fd5f4c02b5416029a0d2946397fcdf8340bf
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.183925722Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.188333523Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.188370642Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.188392894Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.193309886Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.193342993Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.193360223Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.198739219Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.198780597Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.198819849Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.203908398Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 04 21:08:05 ha-067477 crio[647]: time="2024-09-04 21:08:05.203941776Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 04 21:08:08 ha-067477 crio[647]: time="2024-09-04 21:08:08.648161578Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=a6690a11-c33e-47bd-8e78-ba792ffe8824 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:08:08 ha-067477 crio[647]: time="2024-09-04 21:08:08.649092658Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=a6690a11-c33e-47bd-8e78-ba792ffe8824 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:08:08 ha-067477 crio[647]: time="2024-09-04 21:08:08.649728467Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=4680742f-4426-444e-a598-224c001599ae name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:08:08 ha-067477 crio[647]: time="2024-09-04 21:08:08.650188970Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=4680742f-4426-444e-a598-224c001599ae name=/runtime.v1.ImageService/ImageStatus
	Sep 04 21:08:08 ha-067477 crio[647]: time="2024-09-04 21:08:08.650985571Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-067477/kube-controller-manager" id=53a6491b-d1f2-416f-9a28-99b648b86ed2 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 04 21:08:08 ha-067477 crio[647]: time="2024-09-04 21:08:08.651082398Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 04 21:08:08 ha-067477 crio[647]: time="2024-09-04 21:08:08.723238060Z" level=info msg="Created container 4adf4dd4d0c413bb083a3c08b6156aa5a6241a2060de2dfedf3d9e358ae40c0c: kube-system/kube-controller-manager-ha-067477/kube-controller-manager" id=53a6491b-d1f2-416f-9a28-99b648b86ed2 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 04 21:08:08 ha-067477 crio[647]: time="2024-09-04 21:08:08.723945375Z" level=info msg="Starting container: 4adf4dd4d0c413bb083a3c08b6156aa5a6241a2060de2dfedf3d9e358ae40c0c" id=0dbf76d0-501c-46ff-ac95-ad90b400427a name=/runtime.v1.RuntimeService/StartContainer
	Sep 04 21:08:08 ha-067477 crio[647]: time="2024-09-04 21:08:08.735367910Z" level=info msg="Started container" PID=1887 containerID=4adf4dd4d0c413bb083a3c08b6156aa5a6241a2060de2dfedf3d9e358ae40c0c description=kube-system/kube-controller-manager-ha-067477/kube-controller-manager id=0dbf76d0-501c-46ff-ac95-ad90b400427a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5aaf5e3e0244dcb2b30221f934bff85becac682d7504564a7c59b57bb71b0f85
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4adf4dd4d0c41       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   17 seconds ago       Running             kube-controller-manager   6                   5aaf5e3e0244d       kube-controller-manager-ha-067477
	c7b87e4605e05       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   41 seconds ago       Running             kube-vip                  3                   5fd5c0063f4ca       kube-vip-ha-067477
	3303d8bfa826f       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   45 seconds ago       Running             kube-apiserver            4                   eeacf1522ea27       kube-apiserver-ha-067477
	2fffe963fa224       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   58 seconds ago       Running             storage-provisioner       3                   cbbfadddce5cd       storage-provisioner
	efb6704dfe3e0       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   59 seconds ago       Running             coredns                   2                   c967511cae31e       coredns-6f6b679f8f-qdnlw
	f94689f879375       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   c29f3439f121d       busybox-7dff88458-qfnx7
	b6705b402bee6       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89   About a minute ago   Running             kube-proxy                2                   b176dfe8d817f       kube-proxy-n8z9c
	adeb68ce4de36       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   9b1f15e10f764       kindnet-nccgv
	28415fcbcd266       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   7984c9a88f6b6       coredns-6f6b679f8f-ltwpt
	072baffd21df1       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   About a minute ago   Exited              kube-controller-manager   5                   5aaf5e3e0244d       kube-controller-manager-ha-067477
	457b109fb78d2       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   About a minute ago   Exited              kube-apiserver            3                   eeacf1522ea27       kube-apiserver-ha-067477
	c67b0c8f89281       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Exited              kube-vip                  2                   5fd5c0063f4ca       kube-vip-ha-067477
	181f987c18b91       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb   About a minute ago   Running             kube-scheduler            2                   4e4a3073156a3       kube-scheduler-ha-067477
	5c2f5d4afeedd       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   aa63310e7ebac       etcd-ha-067477
	
	
	==> coredns [28415fcbcd266a82605426515b42d28cbfe4dcba0d2a581990dd9cfe1ebfa341] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33205 - 33024 "HINFO IN 3921927131486495959.529190464812844119. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025592952s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[696674830]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Sep-2024 21:07:22.922) (total time: 30001ms):
	Trace[696674830]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (21:07:52.922)
	Trace[696674830]: [30.001000556s] [30.001000556s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[36103736]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Sep-2024 21:07:22.922) (total time: 30001ms):
	Trace[36103736]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (21:07:52.923)
	Trace[36103736]: [30.001397643s] [30.001397643s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[696963378]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Sep-2024 21:07:22.921) (total time: 30001ms):
	Trace[696963378]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (21:07:52.922)
	Trace[696963378]: [30.001816349s] [30.001816349s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [efb6704dfe3e0e82675f7c6a686421b43c3dbc586dd45a996c30bf9c59662382] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53918 - 7830 "HINFO IN 4809180978571551483.9020013929066208943. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041264579s
	
	
	==> describe nodes <==
	Name:               ha-067477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-067477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=ha-067477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_04T20_58_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 20:58:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-067477
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 21:07:34 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Sep 2024 21:07:04 +0000   Wed, 04 Sep 2024 21:08:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Sep 2024 21:07:04 +0000   Wed, 04 Sep 2024 21:08:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Sep 2024 21:07:04 +0000   Wed, 04 Sep 2024 21:08:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Sep 2024 21:07:04 +0000   Wed, 04 Sep 2024 21:08:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-067477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 357ca3831bad425f94a67e9b027ad9de
	  System UUID:                2f984e9e-b856-4486-ab7a-253e6e9dd164
	  Boot ID:                    02fc5889-82d8-42f6-b649-9c13bcf74bdb
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qfnx7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 coredns-6f6b679f8f-ltwpt             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-6f6b679f8f-qdnlw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-067477                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-nccgv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-067477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-067477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-n8z9c                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-067477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-067477                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 60s                    kube-proxy       
	  Normal   Starting                 4m13s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-067477 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-067477 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-067477 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-067477 event: Registered Node ha-067477 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-067477 status is now: NodeReady
	  Normal   RegisteredNode           9m45s                  node-controller  Node ha-067477 event: Registered Node ha-067477 in Controller
	  Normal   RegisteredNode           8m30s                  node-controller  Node ha-067477 event: Registered Node ha-067477 in Controller
	  Normal   RegisteredNode           5m56s                  node-controller  Node ha-067477 event: Registered Node ha-067477 in Controller
	  Normal   NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node ha-067477 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node ha-067477 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node ha-067477 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m13s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m13s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m34s                  node-controller  Node ha-067477 event: Registered Node ha-067477 in Controller
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-067477 event: Registered Node ha-067477 in Controller
	  Normal   RegisteredNode           3m53s                  node-controller  Node ha-067477 event: Registered Node ha-067477 in Controller
	  Normal   Starting                 2m                     kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m                     kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m (x8 over 2m)        kubelet          Node ha-067477 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m (x8 over 2m)        kubelet          Node ha-067477 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m (x7 over 2m)        kubelet          Node ha-067477 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           63s                    node-controller  Node ha-067477 event: Registered Node ha-067477 in Controller
	  Normal   RegisteredNode           14s                    node-controller  Node ha-067477 event: Registered Node ha-067477 in Controller
	  Normal   NodeNotReady             8s                     node-controller  Node ha-067477 status is now: NodeNotReady
	
	
	Name:               ha-067477-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-067477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=ha-067477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_04T20_58_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 20:58:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-067477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 21:08:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Sep 2024 21:07:05 +0000   Wed, 04 Sep 2024 20:58:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Sep 2024 21:07:05 +0000   Wed, 04 Sep 2024 20:58:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Sep 2024 21:07:05 +0000   Wed, 04 Sep 2024 20:58:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Sep 2024 21:07:05 +0000   Wed, 04 Sep 2024 20:59:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-067477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 b26f9a3216d44d73ad57e0606a717d2f
	  System UUID:                95596c57-3301-4a34-8a72-727eeaa7f9cf
	  Boot ID:                    02fc5889-82d8-42f6-b649-9c13bcf74bdb
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pm8jj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 etcd-ha-067477-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m54s
	  kube-system                 kindnet-hbjns                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m55s
	  kube-system                 kube-apiserver-ha-067477-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 kube-controller-manager-ha-067477-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 kube-proxy-7h6l2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-scheduler-ha-067477-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 kube-vip-ha-067477-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m49s                  kube-proxy       
	  Normal   Starting                 67s                    kube-proxy       
	  Normal   Starting                 4m16s                  kube-proxy       
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  9m55s (x8 over 9m55s)  kubelet          Node ha-067477-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m55s (x8 over 9m55s)  kubelet          Node ha-067477-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m55s (x7 over 9m55s)  kubelet          Node ha-067477-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m51s                  node-controller  Node ha-067477-m02 event: Registered Node ha-067477-m02 in Controller
	  Normal   RegisteredNode           9m45s                  node-controller  Node ha-067477-m02 event: Registered Node ha-067477-m02 in Controller
	  Normal   RegisteredNode           8m30s                  node-controller  Node ha-067477-m02 event: Registered Node ha-067477-m02 in Controller
	  Normal   NodeHasSufficientPID     6m25s (x7 over 6m25s)  kubelet          Node ha-067477-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m25s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m25s (x8 over 6m25s)  kubelet          Node ha-067477-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m25s (x8 over 6m25s)  kubelet          Node ha-067477-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           5m56s                  node-controller  Node ha-067477-m02 event: Registered Node ha-067477-m02 in Controller
	  Warning  CgroupV1                 5m11s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node ha-067477-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node ha-067477-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m10s (x7 over 5m10s)  kubelet          Node ha-067477-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m34s                  node-controller  Node ha-067477-m02 event: Registered Node ha-067477-m02 in Controller
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-067477-m02 event: Registered Node ha-067477-m02 in Controller
	  Normal   RegisteredNode           3m53s                  node-controller  Node ha-067477-m02 event: Registered Node ha-067477-m02 in Controller
	  Normal   Starting                 118s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 118s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  118s (x8 over 118s)    kubelet          Node ha-067477-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    118s (x8 over 118s)    kubelet          Node ha-067477-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s (x7 over 118s)    kubelet          Node ha-067477-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           63s                    node-controller  Node ha-067477-m02 event: Registered Node ha-067477-m02 in Controller
	  Normal   RegisteredNode           14s                    node-controller  Node ha-067477-m02 event: Registered Node ha-067477-m02 in Controller
	
	
	Name:               ha-067477-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-067477-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=ha-067477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_04T21_01_04_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 21:01:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-067477-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 21:08:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Sep 2024 21:08:06 +0000   Wed, 04 Sep 2024 21:08:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Sep 2024 21:08:06 +0000   Wed, 04 Sep 2024 21:08:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Sep 2024 21:08:06 +0000   Wed, 04 Sep 2024 21:08:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Sep 2024 21:08:06 +0000   Wed, 04 Sep 2024 21:08:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-067477-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 64e3744a66b14c1ab82d410b4671d8cf
	  System UUID:                a6c48751-fa15-4bce-9b07-13c11c4f9d31
	  Boot ID:                    02fc5889-82d8-42f6-b649-9c13bcf74bdb
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ddvww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kindnet-ldjfl              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m24s
	  kube-system                 kube-proxy-9c6g4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m58s                  kube-proxy       
	  Normal   Starting                 5s                     kube-proxy       
	  Normal   Starting                 7m20s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    7m24s (x2 over 7m24s)  kubelet          Node ha-067477-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m24s (x2 over 7m24s)  kubelet          Node ha-067477-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  7m24s (x2 over 7m24s)  kubelet          Node ha-067477-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m22s                  node-controller  Node ha-067477-m04 event: Registered Node ha-067477-m04 in Controller
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-067477-m04 event: Registered Node ha-067477-m04 in Controller
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-067477-m04 event: Registered Node ha-067477-m04 in Controller
	  Normal   NodeReady                7m8s                   kubelet          Node ha-067477-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m57s                  node-controller  Node ha-067477-m04 event: Registered Node ha-067477-m04 in Controller
	  Normal   RegisteredNode           4m35s                  node-controller  Node ha-067477-m04 event: Registered Node ha-067477-m04 in Controller
	  Normal   RegisteredNode           4m32s                  node-controller  Node ha-067477-m04 event: Registered Node ha-067477-m04 in Controller
	  Normal   NodeNotReady             3m55s                  node-controller  Node ha-067477-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m54s                  node-controller  Node ha-067477-m04 event: Registered Node ha-067477-m04 in Controller
	  Normal   Starting                 3m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m33s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m26s (x7 over 3m33s)  kubelet          Node ha-067477-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    3m20s (x8 over 3m33s)  kubelet          Node ha-067477-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  3m20s (x8 over 3m33s)  kubelet          Node ha-067477-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           64s                    node-controller  Node ha-067477-m04 event: Registered Node ha-067477-m04 in Controller
	  Normal   Starting                 34s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     27s (x7 over 33s)      kubelet          Node ha-067477-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             24s                    node-controller  Node ha-067477-m04 status is now: NodeNotReady
	  Normal   NodeHasNoDiskPressure    21s (x8 over 33s)      kubelet          Node ha-067477-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  21s (x8 over 33s)      kubelet          Node ha-067477-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           15s                    node-controller  Node ha-067477-m04 event: Registered Node ha-067477-m04 in Controller
	
	
	==> dmesg <==
	[Sep 4 20:07] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Sep 4 20:31] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [5c2f5d4afeedde4908030293989083eb15088b19111a2961c4fd3c344503e822] <==
	{"level":"info","ts":"2024-09-04T21:06:59.687172Z","caller":"traceutil/trace.go:171","msg":"trace[894486286] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; }","duration":"7.523859293s","start":"2024-09-04T21:06:52.163309Z","end":"2024-09-04T21:06:59.687168Z","steps":["trace[894486286] 'agreement among raft nodes before linearized reading'  (duration: 7.52384738s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T21:06:59.687185Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T21:06:52.163247Z","time spent":"7.523934008s","remote":"127.0.0.1:36132","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 "}
	{"level":"warn","ts":"2024-09-04T21:06:59.687203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.530242951s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-04T21:06:59.687214Z","caller":"traceutil/trace.go:171","msg":"trace[2108842703] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"7.530254881s","start":"2024-09-04T21:06:52.156956Z","end":"2024-09-04T21:06:59.687211Z","steps":["trace[2108842703] 'agreement among raft nodes before linearized reading'  (duration: 7.530243091s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T21:06:59.687226Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T21:06:52.156881Z","time spent":"7.53034164s","remote":"127.0.0.1:36228","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2024-09-04T21:06:59.687245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.546554095s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-04T21:06:59.687256Z","caller":"traceutil/trace.go:171","msg":"trace[159740119] range","detail":"{range_begin:/registry/configmaps/kube-system/; range_end:/registry/configmaps/kube-system0; }","duration":"7.54656632s","start":"2024-09-04T21:06:52.140686Z","end":"2024-09-04T21:06:59.687253Z","steps":["trace[159740119] 'agreement among raft nodes before linearized reading'  (duration: 7.546554275s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T21:06:59.687269Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T21:06:52.140610Z","time spent":"7.546654728s","remote":"127.0.0.1:35894","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 "}
	{"level":"warn","ts":"2024-09-04T21:06:59.687286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.903173451s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-04T21:06:59.687299Z","caller":"traceutil/trace.go:171","msg":"trace[955996371] range","detail":"{range_begin:/registry/clusterroles; range_end:; }","duration":"7.903184346s","start":"2024-09-04T21:06:51.784109Z","end":"2024-09-04T21:06:59.687294Z","steps":["trace[955996371] 'agreement among raft nodes before linearized reading'  (duration: 7.903173294s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T21:06:59.687312Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T21:06:51.784096Z","time spent":"7.903211751s","remote":"127.0.0.1:36132","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"warn","ts":"2024-09-04T21:06:59.687329Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.903304049s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-04T21:06:59.687339Z","caller":"traceutil/trace.go:171","msg":"trace[1568153188] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; }","duration":"7.903314682s","start":"2024-09-04T21:06:51.784021Z","end":"2024-09-04T21:06:59.687336Z","steps":["trace[1568153188] 'agreement among raft nodes before linearized reading'  (duration: 7.903304082s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T21:06:59.687351Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T21:06:51.783980Z","time spent":"7.903368244s","remote":"127.0.0.1:36158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2024-09-04T21:06:59.687368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.178979559s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-lyxvccc6c3kj4iy76zopmvzcyu\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-04T21:06:59.687378Z","caller":"traceutil/trace.go:171","msg":"trace[893267306] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-lyxvccc6c3kj4iy76zopmvzcyu; range_end:; }","duration":"8.178990283s","start":"2024-09-04T21:06:51.508385Z","end":"2024-09-04T21:06:59.687375Z","steps":["trace[893267306] 'agreement among raft nodes before linearized reading'  (duration: 8.178979576s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T21:06:59.687390Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T21:06:51.508344Z","time spent":"8.179042656s","remote":"127.0.0.1:36044","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/leases/kube-system/apiserver-lyxvccc6c3kj4iy76zopmvzcyu\" "}
	{"level":"warn","ts":"2024-09-04T21:06:59.687409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.431489395s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-067477-m02\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-04T21:06:59.687420Z","caller":"traceutil/trace.go:171","msg":"trace[1664800874] range","detail":"{range_begin:/registry/minions/ha-067477-m02; range_end:; }","duration":"8.431500267s","start":"2024-09-04T21:06:51.255915Z","end":"2024-09-04T21:06:59.687416Z","steps":["trace[1664800874] 'agreement among raft nodes before linearized reading'  (duration: 8.43148974s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T21:06:59.687431Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T21:06:51.255882Z","time spent":"8.431545648s","remote":"127.0.0.1:35964","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":0,"response size":0,"request content":"key:\"/registry/minions/ha-067477-m02\" "}
	{"level":"info","ts":"2024-09-04T21:06:59.703768Z","caller":"etcdserver/v3_server.go:912","msg":"first commit in current term: resending ReadIndex request"}
	{"level":"warn","ts":"2024-09-04T21:06:59.713745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"478.768042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T21:06:59.713802Z","caller":"traceutil/trace.go:171","msg":"trace[1347764836] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:2480; }","duration":"478.839672ms","start":"2024-09-04T21:06:59.234950Z","end":"2024-09-04T21:06:59.713790Z","steps":["trace[1347764836] 'agreement among raft nodes before linearized reading'  (duration: 478.729192ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T21:06:59.713830Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T21:06:59.234914Z","time spent":"478.90906ms","remote":"127.0.0.1:35826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":121,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 "}
	
	
	==> kernel <==
	 21:08:27 up  4:50,  0 users,  load average: 3.81, 2.62, 1.93
	Linux ha-067477 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [adeb68ce4de365124f1feb3d48389796f2cb55a047716d880704c99770c755c7] <==
	E0904 21:07:55.185132       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0904 21:07:55.185099       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0904 21:07:56.584597       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0904 21:07:56.584635       1 metrics.go:61] Registering metrics
	I0904 21:07:56.584699       1 controller.go:374] Syncing nftables rules
	I0904 21:08:05.183600       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:08:05.183679       1 main.go:299] handling current node
	I0904 21:08:05.187543       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0904 21:08:05.187576       1 main.go:322] Node ha-067477-m02 has CIDR [10.244.1.0/24] 
	I0904 21:08:05.187730       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0904 21:08:05.187799       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0904 21:08:05.187807       1 main.go:322] Node ha-067477-m04 has CIDR [10.244.3.0/24] 
	I0904 21:08:05.187843       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0904 21:08:15.188201       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0904 21:08:15.188243       1 main.go:322] Node ha-067477-m02 has CIDR [10.244.1.0/24] 
	I0904 21:08:15.188395       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0904 21:08:15.188410       1 main.go:322] Node ha-067477-m04 has CIDR [10.244.3.0/24] 
	I0904 21:08:15.188459       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:08:15.188566       1 main.go:299] handling current node
	I0904 21:08:25.183901       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0904 21:08:25.184046       1 main.go:322] Node ha-067477-m04 has CIDR [10.244.3.0/24] 
	I0904 21:08:25.184216       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0904 21:08:25.184260       1 main.go:299] handling current node
	I0904 21:08:25.184298       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0904 21:08:25.184333       1 main.go:322] Node ha-067477-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3303d8bfa826fffe5b914cceb2c3981280bc01a223ae878ae553565ded9e88ea] <==
	I0904 21:07:43.947377       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0904 21:07:43.947430       1 aggregator.go:169] waiting for initial CRD sync...
	I0904 21:07:44.040328       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0904 21:07:44.041242       1 shared_informer.go:320] Caches are synced for configmaps
	I0904 21:07:44.052838       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0904 21:07:44.052932       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0904 21:07:44.052965       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0904 21:07:44.053942       1 aggregator.go:171] initial CRD sync complete...
	I0904 21:07:44.054022       1 autoregister_controller.go:144] Starting autoregister controller
	I0904 21:07:44.054054       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0904 21:07:44.141249       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0904 21:07:44.141388       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0904 21:07:44.141400       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0904 21:07:44.141493       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0904 21:07:44.148138       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0904 21:07:44.154819       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0904 21:07:44.156642       1 cache.go:39] Caches are synced for autoregister controller
	I0904 21:07:44.156804       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0904 21:07:44.162992       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0904 21:07:44.163024       1 policy_source.go:224] refreshing policies
	I0904 21:07:44.212057       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0904 21:07:44.960812       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0904 21:07:45.570721       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0904 21:07:45.573497       1 controller.go:615] quota admission added evaluator for: endpoints
	I0904 21:07:45.585568       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [457b109fb78d221ef5ecf1f454e194e56bba831006bf0d7bce13be9a287584eb] <==
	E0904 21:06:59.702950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: etcdserver: leader changed" logger="UnhandledError"
	W0904 21:06:59.702960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Role: etcdserver: leader changed
	E0904 21:06:59.702967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Role: failed to list *v1.Role: etcdserver: leader changed" logger="UnhandledError"
	W0904 21:06:59.710154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: etcdserver: leader changed
	E0904 21:06:59.710207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: etcdserver: leader changed" logger="UnhandledError"
	I0904 21:06:59.791595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 21:07:01.174938       1 shared_informer.go:320] Caches are synced for configmaps
	I0904 21:07:01.421770       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0904 21:07:01.421954       1 aggregator.go:171] initial CRD sync complete...
	I0904 21:07:01.422017       1 autoregister_controller.go:144] Starting autoregister controller
	I0904 21:07:01.422049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0904 21:07:01.881119       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0904 21:07:01.881930       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0904 21:07:01.890364       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0904 21:07:01.890456       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0904 21:07:01.892636       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0904 21:07:01.922128       1 cache.go:39] Caches are synced for autoregister controller
	I0904 21:07:02.082703       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0904 21:07:02.418453       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0904 21:07:02.581759       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E0904 21:07:02.594670       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0904 21:07:02.930644       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0904 21:07:02.930679       1 policy_source.go:224] refreshing policies
	I0904 21:07:02.937010       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	F0904 21:07:39.781496       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [072baffd21df14a7c0b4e096845fcc1970d0d672baec85de4e941731b78ee2e2] <==
	I0904 21:07:23.674727       1 serving.go:386] Generated self-signed cert in-memory
	I0904 21:07:24.153448       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0904 21:07:24.153482       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:07:24.155323       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0904 21:07:24.155513       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0904 21:07:24.155726       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0904 21:07:24.155823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0904 21:07:34.174717       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [4adf4dd4d0c413bb083a3c08b6156aa5a6241a2060de2dfedf3d9e358ae40c0c] <==
	I0904 21:08:12.305264       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-067477-m02"
	I0904 21:08:12.305297       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-067477-m04"
	I0904 21:08:12.305445       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 21:08:12.338279       1 shared_informer.go:320] Caches are synced for attach detach
	I0904 21:08:12.373094       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0904 21:08:12.379500       1 shared_informer.go:320] Caches are synced for resource quota
	I0904 21:08:12.409511       1 shared_informer.go:320] Caches are synced for resource quota
	I0904 21:08:12.411079       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0904 21:08:12.411211       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-067477-m04"
	I0904 21:08:12.848262       1 shared_informer.go:320] Caches are synced for garbage collector
	I0904 21:08:12.854627       1 shared_informer.go:320] Caches are synced for garbage collector
	I0904 21:08:12.854658       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 21:08:18.418023       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-067477-m04"
	I0904 21:08:18.418207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-067477"
	I0904 21:08:18.446915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-067477"
	I0904 21:08:18.522560       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.919214ms"
	I0904 21:08:18.522671       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.025µs"
	I0904 21:08:18.619171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="21.55275ms"
	I0904 21:08:18.619295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="81.91µs"
	I0904 21:08:18.682615       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-hg6cr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-hg6cr\": the object has been modified; please apply your changes to the latest version and try again"
	I0904 21:08:18.682931       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6265e008-8525-41c6-b429-ff17cb10c864", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-hg6cr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-hg6cr": the object has been modified; please apply your changes to the latest version and try again
	I0904 21:08:22.216204       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.723822ms"
	I0904 21:08:22.216501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.289µs"
	I0904 21:08:22.405328       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-067477"
	I0904 21:08:23.710597       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-067477"
	
	
	==> kube-proxy [b6705b402bee6dac6b07554fa0a21cf20dbd44c16a5b6ca24e418bd899f30efc] <==
	I0904 21:07:25.816642       1 server_linux.go:66] "Using iptables proxy"
	I0904 21:07:25.939164       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0904 21:07:25.939305       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 21:07:25.986455       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 21:07:25.986644       1 server_linux.go:169] "Using iptables Proxier"
	I0904 21:07:25.991139       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 21:07:25.991587       1 server.go:483] "Version info" version="v1.31.0"
	I0904 21:07:25.992166       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 21:07:25.994088       1 config.go:197] "Starting service config controller"
	I0904 21:07:25.994223       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 21:07:25.994300       1 config.go:104] "Starting endpoint slice config controller"
	I0904 21:07:25.994331       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 21:07:25.996235       1 config.go:326] "Starting node config controller"
	I0904 21:07:25.996315       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 21:07:26.095279       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0904 21:07:26.095333       1 shared_informer.go:320] Caches are synced for service config
	I0904 21:07:26.096433       1 shared_informer.go:320] Caches are synced for node config
	E0904 21:07:44.104734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0904 21:07:44.104868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)" logger="UnhandledError"
	E0904 21:07:44.104932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	
	
	==> kube-scheduler [181f987c18b91abb484fd77e89478ef933039a83fc1d9adc31037576a4d8902b] <==
	W0904 21:06:56.151216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0904 21:06:56.151258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 21:06:56.239533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0904 21:06:56.239577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 21:06:56.471942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0904 21:06:56.471991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 21:06:56.473191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0904 21:06:56.473239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 21:06:57.151223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0904 21:06:57.151266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 21:06:57.313715       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0904 21:06:57.313757       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0904 21:06:58.411352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0904 21:06:58.411403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 21:06:58.609728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0904 21:06:58.609770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 21:06:58.895715       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0904 21:06:58.895766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 21:06:59.875483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0904 21:06:59.875534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0904 21:07:00.194200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0904 21:07:00.194263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 21:07:01.321523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0904 21:07:01.321685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0904 21:07:22.511157       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 21:07:41 ha-067477 kubelet[765]: E0904 21:07:41.035876     765 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/coredns-6f6b679f8f-ltwpt.17f2269889e2ab8c\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{coredns-6f6b679f8f-ltwpt.17f2269889e2ab8c  kube-system   2811 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-6f6b679f8f-ltwpt,UID:d3ad0e27-10ec-482a-bee3-258dbfbcb87c,APIVersion:v1,ResourceVersion:2199,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:ha-067477,},FirstTimestamp:2024-09-04 21:07:23 +0000 UTC,LastTimestamp:2024-09-04 21:07:40.798274504 +0000 UTC m=+74.463733186,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Seri
es:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-067477,}"
	Sep 04 21:07:42 ha-067477 kubelet[765]: I0904 21:07:42.079565     765 scope.go:117] "RemoveContainer" containerID="072baffd21df14a7c0b4e096845fcc1970d0d672baec85de4e941731b78ee2e2"
	Sep 04 21:07:42 ha-067477 kubelet[765]: E0904 21:07:42.079759     765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-067477_kube-system(0a435ace0f4dc6cc6594089e4b883a41)\"" pod="kube-system/kube-controller-manager-ha-067477" podUID="0a435ace0f4dc6cc6594089e4b883a41"
	Sep 04 21:07:43 ha-067477 kubelet[765]: E0904 21:07:43.987499     765 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Sep 04 21:07:43 ha-067477 kubelet[765]: E0904 21:07:43.991620     765 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Sep 04 21:07:43 ha-067477 kubelet[765]: E0904 21:07:43.991719     765 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Sep 04 21:07:43 ha-067477 kubelet[765]: E0904 21:07:43.991744     765 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	Sep 04 21:07:44 ha-067477 kubelet[765]: I0904 21:07:44.884317     765 scope.go:117] "RemoveContainer" containerID="c67b0c8f8928139dcd99729134f6e970e0a838014b39865351e9d28fcdaae08e"
	Sep 04 21:07:46 ha-067477 kubelet[765]: E0904 21:07:46.544884     765 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484066544635611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:07:46 ha-067477 kubelet[765]: E0904 21:07:46.544928     765 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484066544635611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:07:53 ha-067477 kubelet[765]: I0904 21:07:53.648034     765 scope.go:117] "RemoveContainer" containerID="072baffd21df14a7c0b4e096845fcc1970d0d672baec85de4e941731b78ee2e2"
	Sep 04 21:07:53 ha-067477 kubelet[765]: E0904 21:07:53.648202     765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-067477_kube-system(0a435ace0f4dc6cc6594089e4b883a41)\"" pod="kube-system/kube-controller-manager-ha-067477" podUID="0a435ace0f4dc6cc6594089e4b883a41"
	Sep 04 21:07:55 ha-067477 kubelet[765]: E0904 21:07:55.069058     765 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-067477?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 04 21:07:56 ha-067477 kubelet[765]: E0904 21:07:56.546307     765 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484076546082737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:07:56 ha-067477 kubelet[765]: E0904 21:07:56.546345     765 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484076546082737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:08:05 ha-067477 kubelet[765]: E0904 21:08:05.069941     765 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-067477?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 04 21:08:06 ha-067477 kubelet[765]: E0904 21:08:06.548287     765 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484086548019204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:08:06 ha-067477 kubelet[765]: E0904 21:08:06.548327     765 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484086548019204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:08:08 ha-067477 kubelet[765]: I0904 21:08:08.647740     765 scope.go:117] "RemoveContainer" containerID="072baffd21df14a7c0b4e096845fcc1970d0d672baec85de4e941731b78ee2e2"
	Sep 04 21:08:15 ha-067477 kubelet[765]: E0904 21:08:15.071574     765 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-067477?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 04 21:08:16 ha-067477 kubelet[765]: E0904 21:08:16.549529     765 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484096549361139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:08:16 ha-067477 kubelet[765]: E0904 21:08:16.549565     765 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484096549361139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:08:25 ha-067477 kubelet[765]: E0904 21:08:25.072832     765 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-067477?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 04 21:08:26 ha-067477 kubelet[765]: E0904 21:08:26.555195     765 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484106553378208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 21:08:26 ha-067477 kubelet[765]: E0904 21:08:26.555234     765 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725484106553378208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-067477 -n ha-067477
helpers_test.go:261: (dbg) Run:  kubectl --context ha-067477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (129.64s)

                                                
                                    

Test pass (293/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.61
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.17
9 TestDownloadOnly/v1.20.0/DeleteAll 0.31
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.2
12 TestDownloadOnly/v1.31.0/json-events 7.76
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 237.55
31 TestAddons/serial/GCPAuth/Namespaces 0.25
35 TestAddons/parallel/InspektorGadget 10.81
39 TestAddons/parallel/CSI 53.47
40 TestAddons/parallel/Headlamp 16.73
41 TestAddons/parallel/CloudSpanner 6.56
42 TestAddons/parallel/LocalPath 51.42
43 TestAddons/parallel/NvidiaDevicePlugin 6.54
44 TestAddons/parallel/Yakd 11.75
45 TestAddons/StoppedEnableDisable 6.32
46 TestCertOptions 39.91
47 TestCertExpiration 240.09
49 TestForceSystemdFlag 39.59
50 TestForceSystemdEnv 35.74
56 TestErrorSpam/setup 30.37
57 TestErrorSpam/start 0.8
58 TestErrorSpam/status 1.1
59 TestErrorSpam/pause 2.33
60 TestErrorSpam/unpause 1.99
61 TestErrorSpam/stop 1.54
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.57
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 30.46
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.47
73 TestFunctional/serial/CacheCmd/cache/add_local 1.31
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.14
78 TestFunctional/serial/CacheCmd/cache/delete 0.13
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
81 TestFunctional/serial/ExtraConfig 36.59
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.79
84 TestFunctional/serial/LogsFileCmd 1.89
85 TestFunctional/serial/InvalidService 4.39
87 TestFunctional/parallel/ConfigCmd 0.49
88 TestFunctional/parallel/DashboardCmd 11.29
89 TestFunctional/parallel/DryRun 0.49
90 TestFunctional/parallel/InternationalLanguage 0.26
91 TestFunctional/parallel/StatusCmd 1.33
95 TestFunctional/parallel/ServiceCmdConnect 7.74
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 25.86
99 TestFunctional/parallel/SSHCmd 0.54
100 TestFunctional/parallel/CpCmd 2.01
102 TestFunctional/parallel/FileSync 0.35
103 TestFunctional/parallel/CertSync 2.2
107 TestFunctional/parallel/NodeLabels 0.15
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
112 TestFunctional/parallel/Version/short 0.09
113 TestFunctional/parallel/Version/components 0.97
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.95
119 TestFunctional/parallel/ImageCommands/Setup 0.81
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.62
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.28
125 TestFunctional/parallel/ServiceCmd/DeployApp 12.32
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.21
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.21
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.27
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.37
136 TestFunctional/parallel/ServiceCmd/List 0.32
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
139 TestFunctional/parallel/ServiceCmd/Format 0.37
140 TestFunctional/parallel/ServiceCmd/URL 0.34
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
148 TestFunctional/parallel/ProfileCmd/profile_list 0.4
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
150 TestFunctional/parallel/MountCmd/any-port 8.88
151 TestFunctional/parallel/MountCmd/specific-port 1.1
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.07
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 183.68
160 TestMultiControlPlane/serial/DeployApp 9.22
161 TestMultiControlPlane/serial/PingHostFromPods 1.62
162 TestMultiControlPlane/serial/AddWorkerNode 36.96
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.77
165 TestMultiControlPlane/serial/CopyFile 19.09
166 TestMultiControlPlane/serial/StopSecondaryNode 12.69
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.53
168 TestMultiControlPlane/serial/RestartSecondaryNode 24.3
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 5.28
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 181.05
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.64
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
173 TestMultiControlPlane/serial/StopCluster 35.83
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
176 TestMultiControlPlane/serial/AddSecondaryNode 77.48
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
181 TestJSONOutput/start/Command 48.85
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.76
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.86
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.24
206 TestKicCustomNetwork/create_custom_network 40.3
207 TestKicCustomNetwork/use_default_bridge_network 38.76
208 TestKicExistingNetwork 32.2
209 TestKicCustomSubnet 34.42
210 TestKicStaticIP 34.78
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 66.57
215 TestMountStart/serial/StartWithMountFirst 6.8
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 6.85
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.65
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 8.42
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 79.89
227 TestMultiNode/serial/DeployApp2Nodes 7.76
228 TestMultiNode/serial/PingHostFrom2Pods 1.18
229 TestMultiNode/serial/AddNode 29.69
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.33
232 TestMultiNode/serial/CopyFile 10.14
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 9.58
235 TestMultiNode/serial/RestartKeepsNodes 101.37
236 TestMultiNode/serial/DeleteNode 5.45
237 TestMultiNode/serial/StopMultiNode 24.06
238 TestMultiNode/serial/RestartMultiNode 46.59
239 TestMultiNode/serial/ValidateNameConflict 35.25
244 TestPreload 129.56
246 TestScheduledStopUnix 109.39
249 TestInsufficientStorage 10.84
250 TestRunningBinaryUpgrade 96.13
252 TestKubernetesUpgrade 242.2
253 TestMissingContainerUpgrade 164.18
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 42.28
257 TestNoKubernetes/serial/StartWithStopK8s 22.86
258 TestNoKubernetes/serial/Start 9.55
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
260 TestNoKubernetes/serial/ProfileList 1.46
261 TestNoKubernetes/serial/Stop 1.52
262 TestNoKubernetes/serial/StartNoArgs 7.56
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
264 TestStoppedBinaryUpgrade/Setup 1.42
265 TestStoppedBinaryUpgrade/Upgrade 99.09
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
275 TestPause/serial/Start 59.73
276 TestPause/serial/SecondStartNoReconfiguration 42.78
284 TestNetworkPlugins/group/false 4.07
288 TestPause/serial/Pause 0.9
289 TestPause/serial/VerifyStatus 0.4
290 TestPause/serial/Unpause 0.91
291 TestPause/serial/PauseAgain 1.19
292 TestPause/serial/DeletePaused 3.24
293 TestPause/serial/VerifyDeletedResources 5.92
295 TestStartStop/group/old-k8s-version/serial/FirstStart 165.75
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.67
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
298 TestStartStop/group/old-k8s-version/serial/Stop 12.19
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
300 TestStartStop/group/old-k8s-version/serial/SecondStart 153.75
302 TestStartStop/group/no-preload/serial/FirstStart 74.17
303 TestStartStop/group/no-preload/serial/DeployApp 10.35
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
305 TestStartStop/group/no-preload/serial/Stop 12.1
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/no-preload/serial/SecondStart 268.57
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
311 TestStartStop/group/old-k8s-version/serial/Pause 2.98
313 TestStartStop/group/embed-certs/serial/FirstStart 50.83
314 TestStartStop/group/embed-certs/serial/DeployApp 11.45
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
316 TestStartStop/group/embed-certs/serial/Stop 12.02
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/embed-certs/serial/SecondStart 274.46
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
322 TestStartStop/group/no-preload/serial/Pause 3.16
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.52
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.37
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 296.24
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/embed-certs/serial/Pause 3.48
335 TestStartStop/group/newest-cni/serial/FirstStart 34.86
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
338 TestStartStop/group/newest-cni/serial/Stop 1.29
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
340 TestStartStop/group/newest-cni/serial/SecondStart 16.67
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
344 TestStartStop/group/newest-cni/serial/Pause 3.05
345 TestNetworkPlugins/group/auto/Start 52.29
346 TestNetworkPlugins/group/auto/KubeletFlags 0.3
347 TestNetworkPlugins/group/auto/NetCatPod 10.27
348 TestNetworkPlugins/group/auto/DNS 0.18
349 TestNetworkPlugins/group/auto/Localhost 0.15
350 TestNetworkPlugins/group/auto/HairPin 0.17
351 TestNetworkPlugins/group/kindnet/Start 51.64
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
354 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
355 TestNetworkPlugins/group/kindnet/DNS 0.19
356 TestNetworkPlugins/group/kindnet/Localhost 0.16
357 TestNetworkPlugins/group/kindnet/HairPin 0.19
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
359 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
360 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
361 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.49
362 TestNetworkPlugins/group/calico/Start 72.84
363 TestNetworkPlugins/group/custom-flannel/Start 71.87
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
366 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
367 TestNetworkPlugins/group/calico/KubeletFlags 0.41
368 TestNetworkPlugins/group/calico/NetCatPod 11.34
369 TestNetworkPlugins/group/custom-flannel/DNS 0.23
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
372 TestNetworkPlugins/group/calico/DNS 0.19
373 TestNetworkPlugins/group/calico/Localhost 0.17
374 TestNetworkPlugins/group/calico/HairPin 0.2
375 TestNetworkPlugins/group/enable-default-cni/Start 51.68
376 TestNetworkPlugins/group/flannel/Start 59.29
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.38
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
384 TestNetworkPlugins/group/flannel/NetCatPod 12.41
385 TestNetworkPlugins/group/flannel/DNS 0.22
386 TestNetworkPlugins/group/flannel/Localhost 0.22
387 TestNetworkPlugins/group/bridge/Start 80.18
388 TestNetworkPlugins/group/flannel/HairPin 0.22
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
390 TestNetworkPlugins/group/bridge/NetCatPod 11.25
391 TestNetworkPlugins/group/bridge/DNS 0.18
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (9.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-729266 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-729266 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.605335244s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.61s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-729266
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-729266: exit status 85 (165.691291ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-729266 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |          |
	|         | -p download-only-729266        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 20:34:06
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:34:06.416568  715986 out.go:345] Setting OutFile to fd 1 ...
	I0904 20:34:06.416718  715986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:06.416731  715986 out.go:358] Setting ErrFile to fd 2...
	I0904 20:34:06.416750  715986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:06.417023  715986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	W0904 20:34:06.417181  715986 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19575-710603/.minikube/config/config.json: open /home/jenkins/minikube-integration/19575-710603/.minikube/config/config.json: no such file or directory
	I0904 20:34:06.417622  715986 out.go:352] Setting JSON to true
	I0904 20:34:06.418515  715986 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15397,"bootTime":1725466650,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0904 20:34:06.418590  715986 start.go:139] virtualization:  
	I0904 20:34:06.421504  715986 out.go:97] [download-only-729266] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0904 20:34:06.421693  715986 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 20:34:06.421751  715986 notify.go:220] Checking for updates...
	I0904 20:34:06.424322  715986 out.go:169] MINIKUBE_LOCATION=19575
	I0904 20:34:06.426351  715986 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:34:06.428208  715986 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:34:06.430085  715986 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	I0904 20:34:06.432236  715986 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0904 20:34:06.435871  715986 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 20:34:06.436268  715986 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 20:34:06.463816  715986 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0904 20:34:06.463923  715986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:06.532304  715986 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-04 20:34:06.52219274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:06.532518  715986 docker.go:307] overlay module found
	I0904 20:34:06.534281  715986 out.go:97] Using the docker driver based on user configuration
	I0904 20:34:06.534314  715986 start.go:297] selected driver: docker
	I0904 20:34:06.534323  715986 start.go:901] validating driver "docker" against <nil>
	I0904 20:34:06.534472  715986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:06.591609  715986 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-04 20:34:06.580952803 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:06.591786  715986 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 20:34:06.592085  715986 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0904 20:34:06.592250  715986 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 20:34:06.594292  715986 out.go:169] Using Docker driver with root privileges
	I0904 20:34:06.596121  715986 cni.go:84] Creating CNI manager for ""
	I0904 20:34:06.596149  715986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:34:06.596161  715986 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 20:34:06.596257  715986 start.go:340] cluster config:
	{Name:download-only-729266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-729266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:34:06.598173  715986 out.go:97] Starting "download-only-729266" primary control-plane node in "download-only-729266" cluster
	I0904 20:34:06.598203  715986 cache.go:121] Beginning downloading kic base image for docker with crio
	I0904 20:34:06.599845  715986 out.go:97] Pulling base image v0.0.45 ...
	I0904 20:34:06.599870  715986 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0904 20:34:06.600048  715986 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0904 20:34:06.616901  715986 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0904 20:34:06.617896  715986 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0904 20:34:06.618006  715986 image.go:148] Writing gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0904 20:34:06.686002  715986 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0904 20:34:06.686034  715986 cache.go:56] Caching tarball of preloaded images
	I0904 20:34:06.686207  715986 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0904 20:34:06.688214  715986 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0904 20:34:06.688240  715986 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0904 20:34:06.834971  715986 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-729266 host does not exist
	  To start a cluster, run: "minikube start -p download-only-729266"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.17s)
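For reference, the preload this step cached can be re-fetched and verified by hand. A minimal sketch, assuming curl and md5sum are available on the host; the URL and the expected md5 digest are taken from the download.go line above:

curl -fSL -o preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 \
  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4"
# check against the digest minikube requested via the ?checksum= parameter
echo "59cd2ef07b53f039bfd1761b921f2a02  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -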

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-729266
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (7.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-110365 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-110365 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.756754162s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.76s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-110365
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-110365: exit status 85 (68.469196ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-729266 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | -p download-only-729266        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| delete  | -p download-only-729266        | download-only-729266 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC | 04 Sep 24 20:34 UTC |
	| start   | -o=json --download-only        | download-only-110365 | jenkins | v1.34.0 | 04 Sep 24 20:34 UTC |                     |
	|         | -p download-only-110365        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 20:34:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 20:34:16.696954  716189 out.go:345] Setting OutFile to fd 1 ...
	I0904 20:34:16.697170  716189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:16.697204  716189 out.go:358] Setting ErrFile to fd 2...
	I0904 20:34:16.697224  716189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:34:16.697483  716189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 20:34:16.698034  716189 out.go:352] Setting JSON to true
	I0904 20:34:16.698926  716189 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15407,"bootTime":1725466650,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0904 20:34:16.699025  716189 start.go:139] virtualization:  
	I0904 20:34:16.717957  716189 out.go:97] [download-only-110365] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0904 20:34:16.718251  716189 notify.go:220] Checking for updates...
	I0904 20:34:16.744001  716189 out.go:169] MINIKUBE_LOCATION=19575
	I0904 20:34:16.766049  716189 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:34:16.797615  716189 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:34:16.830917  716189 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	I0904 20:34:16.862007  716189 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0904 20:34:16.925238  716189 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 20:34:16.925592  716189 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 20:34:16.946646  716189 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0904 20:34:16.946748  716189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:17.001005  716189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-04 20:34:16.99150301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:17.001128  716189 docker.go:307] overlay module found
	I0904 20:34:17.025358  716189 out.go:97] Using the docker driver based on user configuration
	I0904 20:34:17.025397  716189 start.go:297] selected driver: docker
	I0904 20:34:17.025405  716189 start.go:901] validating driver "docker" against <nil>
	I0904 20:34:17.025528  716189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:34:17.080226  716189 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-04 20:34:17.070192359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:34:17.080395  716189 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 20:34:17.080668  716189 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0904 20:34:17.080827  716189 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 20:34:17.115190  716189 out.go:169] Using Docker driver with root privileges
	I0904 20:34:17.147881  716189 cni.go:84] Creating CNI manager for ""
	I0904 20:34:17.147918  716189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 20:34:17.147939  716189 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 20:34:17.148022  716189 start.go:340] cluster config:
	{Name:download-only-110365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-110365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:34:17.179772  716189 out.go:97] Starting "download-only-110365" primary control-plane node in "download-only-110365" cluster
	I0904 20:34:17.179831  716189 cache.go:121] Beginning downloading kic base image for docker with crio
	I0904 20:34:17.210326  716189 out.go:97] Pulling base image v0.0.45 ...
	I0904 20:34:17.210364  716189 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:17.210438  716189 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0904 20:34:17.226377  716189 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0904 20:34:17.226501  716189 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0904 20:34:17.226533  716189 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0904 20:34:17.226543  716189 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0904 20:34:17.226552  716189 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0904 20:34:17.269092  716189 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0904 20:34:17.269122  716189 cache.go:56] Caching tarball of preloaded images
	I0904 20:34:17.269273  716189 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0904 20:34:17.294168  716189 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0904 20:34:17.294264  716189 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	I0904 20:34:17.383985  716189 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e6af375765e1700a37be5f07489fb80e -> /home/jenkins/minikube-integration/19575-710603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-110365 host does not exist
	  To start a cluster, run: "minikube start -p download-only-110365"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-110365
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-435820 --alsologtostderr --binary-mirror http://127.0.0.1:40553 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-435820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-435820
--- PASS: TestBinaryMirror (0.54s)
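What this test exercises is that --binary-mirror points minikube's binary downloads at an alternate HTTP base URL. A rough sketch only; the local server, its directory layout, and the profile name below are assumptions, with just the port reused from the command above:

# serve a prepared directory over HTTP (the layout it must mirror is assumed, not shown here)
python3 -m http.server 40553 --directory ./mirror &
out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:40553 --driver=docker --container-runtime=crio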

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-057989
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-057989: exit status 85 (59.833468ms)

                                                
                                                
-- stdout --
	* Profile "addons-057989" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-057989"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-057989
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-057989: exit status 85 (80.184956ms)

                                                
                                                
-- stdout --
	* Profile "addons-057989" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-057989"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (237.55s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-057989 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-057989 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m57.548131213s)
--- PASS: TestAddons/Setup (237.55s)
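A quick follow-up outside the harness, as a sketch, to confirm what that start left enabled and running; addons list is a standard minikube subcommand and the profile/context name comes from the log:

out/minikube-linux-arm64 addons list -p addons-057989
kubectl --context addons-057989 get pods -A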

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.25s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-057989 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-057989 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nm44m" [d1e7a28e-abcd-4272-8dbe-0fec484e5c83] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005421492s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-057989
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-057989: (5.807370716s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

                                                
                                    
x
+
TestAddons/parallel/CSI (53.47s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.325777ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-057989 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-057989 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fdbd9cad-74b2-4713-98f7-66779ae42ee0] Pending
helpers_test.go:344: "task-pv-pod" [fdbd9cad-74b2-4713-98f7-66779ae42ee0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fdbd9cad-74b2-4713-98f7-66779ae42ee0] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003513687s
addons_test.go:590: (dbg) Run:  kubectl --context addons-057989 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-057989 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-057989 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-057989 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-057989 delete pod task-pv-pod: (1.258085447s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-057989 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-057989 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-057989 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [99b1e1e1-d06b-4982-a0ed-9d72e0d0d2ce] Pending
helpers_test.go:344: "task-pv-pod-restore" [99b1e1e1-d06b-4982-a0ed-9d72e0d0d2ce] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [99b1e1e1-d06b-4982-a0ed-9d72e0d0d2ce] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003790963s
addons_test.go:632: (dbg) Run:  kubectl --context addons-057989 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-057989 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-057989 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-057989 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.84942158s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.47s)
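The PVC -> pod -> VolumeSnapshot -> restored-PVC flow above comes from the repo's testdata manifests. As a minimal hand-rolled sketch of just the first step, where the storageClassName "csi-hostpath-sc" is an assumed name rather than something read from those files:

kubectl --context addons-057989 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc   # assumed class name for the csi-hostpath-driver addon
EOF
# poll the phase the same way helpers_test.go does above
kubectl --context addons-057989 get pvc hpvc -o jsonpath='{.status.phase}'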

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-057989 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-057989 --alsologtostderr -v=1: (1.008451094s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-g67sd" [6fd685a5-1293-4d08-ba28-1ad4e5b5dc74] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-g67sd" [6fd685a5-1293-4d08-ba28-1ad4e5b5dc74] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-g67sd" [6fd685a5-1293-4d08-ba28-1ad4e5b5dc74] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.019141789s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-057989 addons disable headlamp --alsologtostderr -v=1: (5.704062787s)
--- PASS: TestAddons/parallel/Headlamp (16.73s)
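Outside the test harness, the same enable-and-wait can be done directly; a sketch, with the namespace and label selector taken from the wait step above:

out/minikube-linux-arm64 addons enable headlamp -p addons-057989
kubectl --context addons-057989 -n headlamp wait pod \
  -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m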

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-l4dt7" [7dc3060a-6eb3-4149-81d7-1ae43d6e76cb] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003595633s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-057989
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-057989 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-057989 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-057989 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ea581291-8725-4100-a7d5-c23d41252543] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ea581291-8725-4100-a7d5-c23d41252543] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ea581291-8725-4100-a7d5-c23d41252543] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004193946s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-057989 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 ssh "cat /opt/local-path-provisioner/pvc-40c44ff0-5aea-4b87-80a2-1fb89aeac81e_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-057989 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-057989 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-057989 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.329811708s)
--- PASS: TestAddons/parallel/LocalPath (51.42s)
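To see what the local-path provisioner actually wrote on the node, the volume data can be listed over ssh; a sketch, with the /opt/local-path-provisioner base directory taken from the cat step above, and only valid while the profile and volume still exist:

out/minikube-linux-arm64 -p addons-057989 ssh -- \
  "sudo find /opt/local-path-provisioner -maxdepth 2 -type f"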

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hxn5k" [e2ce6825-b8bf-4d5a-a77f-337ca9cd2e60] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003581101s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-057989
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7j8ss" [da16d594-5ad8-41a6-947e-35d4aa6c4187] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00535765s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-057989 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-057989 addons disable yakd --alsologtostderr -v=1: (5.74128489s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (6.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-057989
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-057989: (6.046893051s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-057989
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-057989
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-057989
--- PASS: TestAddons/StoppedEnableDisable (6.32s)

                                                
                                    
x
+
TestCertOptions (39.91s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-379108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0904 21:33:24.397979  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-379108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.79038323s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-379108 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-379108 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-379108 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-379108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-379108
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-379108: (2.204018572s)
--- PASS: TestCertOptions (39.91s)
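The openssl check above can be narrowed to just the SAN block to see the extra IPs and names the test asked for; a sketch, valid only while the profile still exists, with the certificate path and flag values taken from the commands above:

out/minikube-linux-arm64 -p cert-options-379108 ssh -- \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"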

                                                
                                    
x
+
TestCertExpiration (240.09s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-154905 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-154905 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.326503616s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-154905 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-154905 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.07299013s)
helpers_test.go:175: Cleaning up "cert-expiration-154905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-154905
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-154905: (2.691942061s)
--- PASS: TestCertExpiration (240.09s)

                                                
                                    
x
+
TestForceSystemdFlag (39.59s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-912801 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-912801 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.856103923s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-912801 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-912801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-912801
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-912801: (2.403299249s)
--- PASS: TestForceSystemdFlag (39.59s)
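A sketch of checking the result by hand: the file path comes from the cat step above, and "systemd" being the expected cgroup_manager value is an assumption about what the test asserts rather than something shown in this log:

out/minikube-linux-arm64 -p force-systemd-flag-912801 ssh -- \
  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"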

                                                
                                    
x
+
TestForceSystemdEnv (35.74s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-603195 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-603195 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.929665315s)
helpers_test.go:175: Cleaning up "force-systemd-env-603195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-603195
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-603195: (2.812865279s)
--- PASS: TestForceSystemdEnv (35.74s)

                                                
                                    
x
+
TestErrorSpam/setup (30.37s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-955120 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-955120 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-955120 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-955120 --driver=docker  --container-runtime=crio: (30.373458673s)
--- PASS: TestErrorSpam/setup (30.37s)

                                                
                                    
x
+
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
x
+
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
x
+
TestErrorSpam/pause (2.33s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 pause
--- PASS: TestErrorSpam/pause (2.33s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.99s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

                                                
                                    
x
+
TestErrorSpam/stop (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 stop: (1.344237316s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-955120 --log_dir /tmp/nospam-955120 stop
--- PASS: TestErrorSpam/stop (1.54s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19575-710603/.minikube/files/etc/test/nested/copy/715981/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (51.57s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262207 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-262207 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (51.571072233s)
--- PASS: TestFunctional/serial/StartWithProxy (51.57s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (30.46s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262207 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-262207 --alsologtostderr -v=8: (30.457339225s)
functional_test.go:663: soft start took 30.457938907s for "functional-262207" cluster.
--- PASS: TestFunctional/serial/SoftStart (30.46s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-262207 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-262207 cache add registry.k8s.io/pause:3.1: (1.485839359s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-262207 cache add registry.k8s.io/pause:3.3: (1.672784627s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-262207 cache add registry.k8s.io/pause:latest: (1.315814717s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.47s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-262207 /tmp/TestFunctionalserialCacheCmdcacheadd_local2808244380/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 cache add minikube-local-cache-test:functional-262207
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 cache delete minikube-local-cache-test:functional-262207
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-262207
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)
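
A minimal sketch of the local-image caching flow the test above runs, assuming an installed minikube binary and a Docker daemon on the host; the image tag is illustrative:

    # build a small throwaway image on the host
    docker build -t my-local-cache-test:functional .
    # copy it into minikube's image cache and load it into the profile
    minikube -p functional-262207 cache add my-local-cache-test:functional
    # drop it from the cache again
    minikube -p functional-262207 cache delete my-local-cache-test:functional
    # clean up the host-side image
    docker rmi my-local-cache-test:functional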

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262207 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (307.056002ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-262207 cache reload: (1.22948847s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)
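
The reload behaviour verified above can be reproduced by hand, assuming an installed minikube binary and that registry.k8s.io/pause:latest was previously added with `cache add`:

    # remove the image from the node's runtime; inspecti then fails with "no such image"
    minikube -p functional-262207 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-262207 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # push everything in the local cache back into the node
    minikube -p functional-262207 cache reload
    # the image is present again
    minikube -p functional-262207 ssh sudo crictl inspecti registry.k8s.io/pause:latest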

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 kubectl -- --context functional-262207 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-262207 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (36.59s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262207 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-262207 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.586075702s)
functional_test.go:761: restart took 36.586183367s for "functional-262207" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.59s)
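
A sketch of the --extra-config mechanism used by this restart, assuming an installed minikube binary: component flags are passed as <component>.<flag>=<value> and applied when the cluster is (re)started.

    minikube start -p functional-262207 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
        --wait=all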

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-262207 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-262207 logs: (1.786335627s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.89s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 logs --file /tmp/TestFunctionalserialLogsFileCmd657115251/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-262207 logs --file /tmp/TestFunctionalserialLogsFileCmd657115251/001/logs.txt: (1.891781223s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.89s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-262207 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-262207
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-262207: exit status 115 (476.037347ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31211 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-262207 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)
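
What the test demonstrates, as a sketch assuming an installed minikube binary: `minikube service` refuses to return a URL for a Service that has no running backing pod and exits with status 115 (SVC_UNREACHABLE), so the failure is surfaced instead of a dead endpoint.

    kubectl --context functional-262207 apply -f testdata/invalidsvc.yaml
    minikube -p functional-262207 service invalid-svc   # exits 115: SVC_UNREACHABLE
    kubectl --context functional-262207 delete -f testdata/invalidsvc.yaml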

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262207 config get cpus: exit status 14 (85.069928ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262207 config get cpus: exit status 14 (72.812028ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
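
The exit-code contract checked above, as a sketch assuming an installed minikube binary: `config get` on an unset key exits 14 and prints an error to stderr, while set/unset round-trip the value.

    minikube -p functional-262207 config unset cpus
    minikube -p functional-262207 config get cpus    # exit status 14: key not found
    minikube -p functional-262207 config set cpus 2
    minikube -p functional-262207 config get cpus    # succeeds and prints the stored value
    minikube -p functional-262207 config unset cpus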

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (11.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-262207 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-262207 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 746746: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.29s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262207 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-262207 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (253.423504ms)

                                                
                                                
-- stdout --
	* [functional-262207] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 20:57:19.941138  746171 out.go:345] Setting OutFile to fd 1 ...
	I0904 20:57:19.941344  746171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:57:19.941356  746171 out.go:358] Setting ErrFile to fd 2...
	I0904 20:57:19.941362  746171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:57:19.941617  746171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 20:57:19.942044  746171 out.go:352] Setting JSON to false
	I0904 20:57:19.942986  746171 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16790,"bootTime":1725466650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0904 20:57:19.943069  746171 start.go:139] virtualization:  
	I0904 20:57:19.945687  746171 out.go:177] * [functional-262207] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0904 20:57:19.947693  746171 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 20:57:19.947727  746171 notify.go:220] Checking for updates...
	I0904 20:57:19.951162  746171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:57:19.953120  746171 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:57:19.954900  746171 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	I0904 20:57:19.956472  746171 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 20:57:19.958030  746171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 20:57:19.960146  746171 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:57:19.960688  746171 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 20:57:19.983164  746171 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0904 20:57:19.983285  746171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:57:20.132446  746171 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-04 20:57:20.121805219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:57:20.132605  746171 docker.go:307] overlay module found
	I0904 20:57:20.135335  746171 out.go:177] * Using the docker driver based on existing profile
	I0904 20:57:20.137225  746171 start.go:297] selected driver: docker
	I0904 20:57:20.137260  746171 start.go:901] validating driver "docker" against &{Name:functional-262207 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-262207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:57:20.137409  746171 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 20:57:20.139815  746171 out.go:201] 
	W0904 20:57:20.141499  746171 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 20:57:20.143337  746171 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262207 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.49s)
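
The validation exercised above, sketched with an installed minikube binary: --dry-run runs the full argument and resource checks against the existing profile without starting anything, and a memory request below the 1800MB minimum fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY).

    # rejected before any work is done
    minikube start -p functional-262207 --dry-run --memory 250MB --driver=docker --container-runtime=crio
    # an acceptable configuration passes the same dry run
    minikube start -p functional-262207 --dry-run --driver=docker --container-runtime=crio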

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262207 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-262207 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (259.946218ms)

                                                
                                                
-- stdout --
	* [functional-262207] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 20:57:20.438206  746287 out.go:345] Setting OutFile to fd 1 ...
	I0904 20:57:20.438430  746287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:57:20.438457  746287 out.go:358] Setting ErrFile to fd 2...
	I0904 20:57:20.438480  746287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:57:20.438869  746287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 20:57:20.439360  746287 out.go:352] Setting JSON to false
	I0904 20:57:20.440386  746287 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16791,"bootTime":1725466650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0904 20:57:20.440498  746287 start.go:139] virtualization:  
	I0904 20:57:20.444699  746287 out.go:177] * [functional-262207] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0904 20:57:20.446863  746287 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 20:57:20.446950  746287 notify.go:220] Checking for updates...
	I0904 20:57:20.453363  746287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 20:57:20.455240  746287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 20:57:20.457030  746287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	I0904 20:57:20.458748  746287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 20:57:20.460376  746287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 20:57:20.462893  746287 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 20:57:20.463512  746287 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 20:57:20.504440  746287 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0904 20:57:20.504560  746287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 20:57:20.629419  746287 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-04 20:57:20.619347847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 20:57:20.629535  746287 docker.go:307] overlay module found
	I0904 20:57:20.631718  746287 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0904 20:57:20.633519  746287 start.go:297] selected driver: docker
	I0904 20:57:20.633542  746287 start.go:901] validating driver "docker" against &{Name:functional-262207 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-262207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 20:57:20.633676  746287 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 20:57:20.636254  746287 out.go:201] 
	W0904 20:57:20.637949  746287 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0904 20:57:20.639565  746287 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-262207 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-262207 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-dzhfw" [89efde45-f325-43a2-8db2-fb033cf872fc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-dzhfw" [89efde45-f325-43a2-8db2-fb033cf872fc] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00514983s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30628
functional_test.go:1675: http://192.168.49.2:30628: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-dzhfw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30628
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.74s)
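
The NodePort round trip above, as a sketch assuming an installed minikube binary and kubectl; the curl step is an illustrative way to hit the resolved endpoint, not part of the test itself:

    kubectl --context functional-262207 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-262207 expose deployment hello-node-connect --type=NodePort --port=8080
    # resolve node IP + NodePort for the service and request it
    URL=$(minikube -p functional-262207 service hello-node-connect --url)
    curl -s "$URL"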

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [af6ef3ac-657b-42bd-ac87-3e001b9c0ac8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004478796s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-262207 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-262207 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-262207 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-262207 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e1075eb0-1af7-4df0-9035-eb1f1b4f9911] Pending
helpers_test.go:344: "sp-pod" [e1075eb0-1af7-4df0-9035-eb1f1b4f9911] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e1075eb0-1af7-4df0-9035-eb1f1b4f9911] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.005505684s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-262207 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-262207 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-262207 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1a0e7b65-8966-416e-bc0b-86f9502c89ab] Pending
helpers_test.go:344: "sp-pod" [1a0e7b65-8966-416e-bc0b-86f9502c89ab] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005194198s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-262207 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.86s)
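
The persistence property verified above, end to end: a file written into the PVC-backed mount survives deleting and recreating the consuming pod. The manifests are the test's own testdata files; kubectl and a running functional-262207 profile are assumed.

    kubectl --context functional-262207 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-262207 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-262207 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-262207 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-262207 apply -f testdata/storage-provisioner/pod.yaml
    # the file written before the first pod was deleted is still present
    kubectl --context functional-262207 exec sp-pod -- ls /tmp/mount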

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh -n functional-262207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 cp functional-262207:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3487862407/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh -n functional-262207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh -n functional-262207 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)
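
A sketch of the copy round trip above, assuming an installed minikube binary; the in-node path follows the test, while the host-side destination is illustrative:

    # host -> node
    minikube -p functional-262207 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host
    minikube -p functional-262207 cp functional-262207:/home/docker/cp-test.txt /tmp/cp-test.txt
    # verify the file landed inside the node
    minikube -p functional-262207 ssh -n functional-262207 "sudo cat /home/docker/cp-test.txt"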

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/715981/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo cat /etc/test/nested/copy/715981/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/715981.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo cat /etc/ssl/certs/715981.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/715981.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo cat /usr/share/ca-certificates/715981.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7159812.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo cat /etc/ssl/certs/7159812.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7159812.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo cat /usr/share/ca-certificates/7159812.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-262207 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262207 ssh "sudo systemctl is-active docker": exit status 1 (363.503537ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262207 ssh "sudo systemctl is-active containerd": exit status 1 (326.689587ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
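
The exclusivity check above reduces to `systemctl is-active` inside the node: with crio selected as the runtime, docker and containerd report inactive (systemctl exits 3, surfaced as exit status 1 by minikube ssh). Sketch, assuming an installed minikube binary; the crio line is an added sanity check, not part of the test:

    minikube -p functional-262207 ssh "sudo systemctl is-active crio"         # active, exit 0
    minikube -p functional-262207 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit
    minikube -p functional-262207 ssh "sudo systemctl is-active containerd"   # inactive, non-zero exit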

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262207 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-262207
localhost/kicbase/echo-server:functional-262207
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262207 image ls --format short --alsologtostderr:
I0904 20:57:22.469446  746708 out.go:345] Setting OutFile to fd 1 ...
I0904 20:57:22.469594  746708 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:22.469603  746708 out.go:358] Setting ErrFile to fd 2...
I0904 20:57:22.469608  746708 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:22.469877  746708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
I0904 20:57:22.470569  746708 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:22.470683  746708 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:22.471155  746708 cli_runner.go:164] Run: docker container inspect functional-262207 --format={{.State.Status}}
I0904 20:57:22.511329  746708 ssh_runner.go:195] Run: systemctl --version
I0904 20:57:22.511395  746708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262207
I0904 20:57:22.537874  746708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/functional-262207/id_rsa Username:docker}
I0904 20:57:22.627224  746708 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262207 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | 71d55d66fd4ee | 95.9MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | 70594c812316a | 48.4MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| localhost/kicbase/echo-server           | functional-262207  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | cd0f0ae0ec9e0 | 92.6MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | d5e283bc63d43 | 90.3MB |
| docker.io/library/nginx                 | latest             | a9dfdba8b7190 | 197MB  |
| localhost/my-image                      | functional-262207  | f51f3dc8480d8 | 1.64MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | fcb0683e6bdbd | 86.9MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | fbbbd428abb4d | 67MB   |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/minikube-local-cache-test     | functional-262207  | 7f4e9df5164fb | 3.33kB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262207 image ls --format table --alsologtostderr:
I0904 20:57:27.358646  747105 out.go:345] Setting OutFile to fd 1 ...
I0904 20:57:27.358837  747105 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:27.358859  747105 out.go:358] Setting ErrFile to fd 2...
I0904 20:57:27.358880  747105 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:27.359128  747105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
I0904 20:57:27.359891  747105 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:27.360060  747105 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:27.360626  747105 cli_runner.go:164] Run: docker container inspect functional-262207 --format={{.State.Status}}
I0904 20:57:27.395629  747105 ssh_runner.go:195] Run: systemctl --version
I0904 20:57:27.395683  747105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262207
I0904 20:57:27.419399  747105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/functional-262207/id_rsa Username:docker}
I0904 20:57:27.514985  747105 ssh_runner.go:195] Run: sudo crictl images --output json
2024/09/04 20:57:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262207 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-262207"
],"size":"4788229"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:ba188f5
79f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48397013"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172049"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"95949719"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["r
egistry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"f0db0ad92ba343a088bad7e701093bdcdc2d284a2a7add3f49e23995a9b0d1df","repoDigests":["docker.io/library/3b78ade729dfaa375012dc568701480d4371f803a5b1b71538c9b5a76c16ab1b-tmp@sha256:56d6b6f9a557fa9eb38e041e1c5dd7d07d7821282a93e4685d9f79fe26b6ef86"],"repoTags":[],"size":"1637643"},{"id":"7f4e9df5164fb76cce6855955a45ea120d303c47a29ff51865142362abb403dd","repoDigests":["localhost/minikube-local-cache-test@sha256:04c20cd662acf54153e1c88e3b00b9dbf9fc7ca01b60ffa5bfd11fdd868a46cd"],"repoTags":["localhost/minikube-local-cache-tes
t:functional-262207"],"size":"3330"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"86930758"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"f51f3dc8480d8d8284df043d1671f0d613a2dbbe6232e3cee9390641d8610c06","repoDigests":["localhost/my-image@sha256:32d8ef717b760a439d4a81ba74cbaa546060154fa2c204bd4cb3f11f6da606b1"],"repoTags":["localhost/my-image:functional-26
2207"],"size":"1640226"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087f
fd01fd9fe1cb39c6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"92567005"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808","registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67007814"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e"],"repoTags":["docker.io/kin
dest/kindnetd:v20240730-75a5af0c"],"size":"90290738"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262207 image ls --format json --alsologtostderr:
I0904 20:57:27.045567  747074 out.go:345] Setting OutFile to fd 1 ...
I0904 20:57:27.045705  747074 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:27.045710  747074 out.go:358] Setting ErrFile to fd 2...
I0904 20:57:27.045715  747074 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:27.046042  747074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
I0904 20:57:27.046793  747074 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:27.047176  747074 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:27.047767  747074 cli_runner.go:164] Run: docker container inspect functional-262207 --format={{.State.Status}}
I0904 20:57:27.069527  747074 ssh_runner.go:195] Run: systemctl --version
I0904 20:57:27.069588  747074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262207
I0904 20:57:27.089048  747074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/functional-262207/id_rsa Username:docker}
I0904 20:57:27.217498  747074 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262207 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "86930758"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "95949719"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "90290738"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "92567005"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "48397013"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55
repoTags:
- docker.io/library/nginx:latest
size: "197172049"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
- registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67007814"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-262207
size: "4788229"
- id: 7f4e9df5164fb76cce6855955a45ea120d303c47a29ff51865142362abb403dd
repoDigests:
- localhost/minikube-local-cache-test@sha256:04c20cd662acf54153e1c88e3b00b9dbf9fc7ca01b60ffa5bfd11fdd868a46cd
repoTags:
- localhost/minikube-local-cache-test:functional-262207
size: "3330"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262207 image ls --format yaml --alsologtostderr:
I0904 20:57:22.832174  746750 out.go:345] Setting OutFile to fd 1 ...
I0904 20:57:22.832445  746750 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:22.832477  746750 out.go:358] Setting ErrFile to fd 2...
I0904 20:57:22.832498  746750 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:22.832769  746750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
I0904 20:57:22.834022  746750 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:22.834226  746750 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:22.834745  746750 cli_runner.go:164] Run: docker container inspect functional-262207 --format={{.State.Status}}
I0904 20:57:22.855694  746750 ssh_runner.go:195] Run: systemctl --version
I0904 20:57:22.855763  746750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262207
I0904 20:57:22.882741  746750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/functional-262207/id_rsa Username:docker}
I0904 20:57:22.972615  746750 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262207 ssh pgrep buildkitd: exit status 1 (249.168358ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image build -t localhost/my-image:functional-262207 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-262207 image build -t localhost/my-image:functional-262207 testdata/build --alsologtostderr: (3.345543374s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262207 image build -t localhost/my-image:functional-262207 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f0db0ad92ba
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-262207
--> f51f3dc8480
Successfully tagged localhost/my-image:functional-262207
f51f3dc8480d8d8284df043d1671f0d613a2dbbe6232e3cee9390641d8610c06
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262207 image build -t localhost/my-image:functional-262207 testdata/build --alsologtostderr:
I0904 20:57:23.322773  746839 out.go:345] Setting OutFile to fd 1 ...
I0904 20:57:23.323468  746839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:23.323484  746839 out.go:358] Setting ErrFile to fd 2...
I0904 20:57:23.323491  746839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 20:57:23.323756  746839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
I0904 20:57:23.324408  746839 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:23.325528  746839 config.go:182] Loaded profile config "functional-262207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0904 20:57:23.326077  746839 cli_runner.go:164] Run: docker container inspect functional-262207 --format={{.State.Status}}
I0904 20:57:23.344534  746839 ssh_runner.go:195] Run: systemctl --version
I0904 20:57:23.344592  746839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262207
I0904 20:57:23.364553  746839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/functional-262207/id_rsa Username:docker}
I0904 20:57:23.454876  746839 build_images.go:161] Building image from path: /tmp/build.1316261545.tar
I0904 20:57:23.454959  746839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0904 20:57:23.464964  746839 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1316261545.tar
I0904 20:57:23.468775  746839 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1316261545.tar: stat -c "%s %y" /var/lib/minikube/build/build.1316261545.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1316261545.tar': No such file or directory
I0904 20:57:23.468806  746839 ssh_runner.go:362] scp /tmp/build.1316261545.tar --> /var/lib/minikube/build/build.1316261545.tar (3072 bytes)
I0904 20:57:23.497422  746839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1316261545
I0904 20:57:23.506734  746839 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1316261545 -xf /var/lib/minikube/build/build.1316261545.tar
I0904 20:57:23.516056  746839 crio.go:315] Building image: /var/lib/minikube/build/build.1316261545
I0904 20:57:23.516134  746839 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-262207 /var/lib/minikube/build/build.1316261545 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0904 20:57:26.580273  746839 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-262207 /var/lib/minikube/build/build.1316261545 --cgroup-manager=cgroupfs: (3.064104545s)
I0904 20:57:26.580362  746839 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1316261545
I0904 20:57:26.590052  746839 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1316261545.tar
I0904 20:57:26.611257  746839 build_images.go:217] Built localhost/my-image:functional-262207 from /tmp/build.1316261545.tar
I0904 20:57:26.611285  746839 build_images.go:133] succeeded building to: functional-262207
I0904 20:57:26.611290  746839 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)

TestFunctional/parallel/ImageCommands/Setup (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-262207
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image load --daemon kicbase/echo-server:functional-262207 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-262207 image load --daemon kicbase/echo-server:functional-262207 --alsologtostderr: (1.341847393s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image load --daemon kicbase/echo-server:functional-262207 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-262207 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-262207 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-bptt4" [541f5f13-569d-40d7-8505-11b152e01c4e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-bptt4" [541f5f13-569d-40d7-8505-11b152e01c4e] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.00539366s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-262207
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image load --daemon kicbase/echo-server:functional-262207 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image save kicbase/echo-server:functional-262207 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image rm kicbase/echo-server:functional-262207 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-262207
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 image save --daemon kicbase/echo-server:functional-262207 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-arm64 -p functional-262207 image save --daemon kicbase/echo-server:functional-262207 --alsologtostderr: (2.215737828s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-262207
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262207 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262207 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-262207 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 743368: os: process already finished
helpers_test.go:502: unable to terminate pid 743250: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-262207 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262207 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-262207 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [51c2c7d5-f49f-4d85-84b8-636ca397114d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [51c2c7d5-f49f-4d85-84b8-636ca397114d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003928543s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.37s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 service list -o json
functional_test.go:1494: Took "358.112056ms" to run "out/minikube-linux-arm64 -p functional-262207 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30781
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30781
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-262207 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.14.73 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-262207 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "345.449552ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "56.708208ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "325.513766ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "50.256042ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (8.88s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdany-port2515906917/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725483427862208295" to /tmp/TestFunctionalparallelMountCmdany-port2515906917/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725483427862208295" to /tmp/TestFunctionalparallelMountCmdany-port2515906917/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725483427862208295" to /tmp/TestFunctionalparallelMountCmdany-port2515906917/001/test-1725483427862208295
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262207 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (399.676862ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  4 20:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  4 20:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  4 20:57 test-1725483427862208295
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh cat /mount-9p/test-1725483427862208295
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-262207 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [68daaf2f-b403-4bfb-9611-440b14120dd5] Pending
helpers_test.go:344: "busybox-mount" [68daaf2f-b403-4bfb-9611-440b14120dd5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [68daaf2f-b403-4bfb-9611-440b14120dd5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [68daaf2f-b403-4bfb-9611-440b14120dd5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004365509s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-262207 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdany-port2515906917/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.88s)

TestFunctional/parallel/MountCmd/specific-port (1.1s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdspecific-port630647091/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdspecific-port630647091/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262207 ssh "sudo umount -f /mount-9p": exit status 1 (255.979107ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-262207 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdspecific-port630647091/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1620396584/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1620396584/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1620396584/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262207 ssh "findmnt -T" /mount1: exit status 1 (534.368273ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262207 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-262207 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1620396584/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1620396584/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262207 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1620396584/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-262207
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-262207
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-262207
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (183.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-067477 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0904 20:58:24.398943  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:24.405752  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:24.418051  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:24.439422  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:24.480796  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:24.562166  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:24.723620  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:25.045202  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:25.687173  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:26.968485  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:29.530655  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:34.652630  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:58:44.894329  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:59:05.375690  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:59:46.337121  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-067477 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m2.862824196s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (183.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (9.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-067477 -- rollout status deployment/busybox: (5.915730798s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-nzkfq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-pm8jj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-qfnx7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-nzkfq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-pm8jj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-qfnx7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-nzkfq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-pm8jj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-qfnx7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-nzkfq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-nzkfq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-pm8jj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-pm8jj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-qfnx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-067477 -- exec busybox-7dff88458-qfnx7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (36.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-067477 -v=7 --alsologtostderr
E0904 21:01:08.261599  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-067477 -v=7 --alsologtostderr: (35.957142375s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-067477 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp testdata/cp-test.txt ha-067477:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1674057591/001/cp-test_ha-067477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477:/home/docker/cp-test.txt ha-067477-m02:/home/docker/cp-test_ha-067477_ha-067477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m02 "sudo cat /home/docker/cp-test_ha-067477_ha-067477-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477:/home/docker/cp-test.txt ha-067477-m03:/home/docker/cp-test_ha-067477_ha-067477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m03 "sudo cat /home/docker/cp-test_ha-067477_ha-067477-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477:/home/docker/cp-test.txt ha-067477-m04:/home/docker/cp-test_ha-067477_ha-067477-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m04 "sudo cat /home/docker/cp-test_ha-067477_ha-067477-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp testdata/cp-test.txt ha-067477-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1674057591/001/cp-test_ha-067477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m02:/home/docker/cp-test.txt ha-067477:/home/docker/cp-test_ha-067477-m02_ha-067477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477 "sudo cat /home/docker/cp-test_ha-067477-m02_ha-067477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m02:/home/docker/cp-test.txt ha-067477-m03:/home/docker/cp-test_ha-067477-m02_ha-067477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m03 "sudo cat /home/docker/cp-test_ha-067477-m02_ha-067477-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m02:/home/docker/cp-test.txt ha-067477-m04:/home/docker/cp-test_ha-067477-m02_ha-067477-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m04 "sudo cat /home/docker/cp-test_ha-067477-m02_ha-067477-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp testdata/cp-test.txt ha-067477-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1674057591/001/cp-test_ha-067477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m03:/home/docker/cp-test.txt ha-067477:/home/docker/cp-test_ha-067477-m03_ha-067477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477 "sudo cat /home/docker/cp-test_ha-067477-m03_ha-067477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m03:/home/docker/cp-test.txt ha-067477-m02:/home/docker/cp-test_ha-067477-m03_ha-067477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m02 "sudo cat /home/docker/cp-test_ha-067477-m03_ha-067477-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m03:/home/docker/cp-test.txt ha-067477-m04:/home/docker/cp-test_ha-067477-m03_ha-067477-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m03 "sudo cat /home/docker/cp-test.txt"
E0904 21:01:41.018440  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:01:41.025252  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:01:41.039005  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:01:41.061101  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:01:41.103044  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:01:41.184852  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m04 "sudo cat /home/docker/cp-test_ha-067477-m03_ha-067477-m04.txt"
E0904 21:01:41.347123  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp testdata/cp-test.txt ha-067477-m04:/home/docker/cp-test.txt
E0904 21:01:41.668884  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1674057591/001/cp-test_ha-067477-m04.txt
E0904 21:01:42.311621  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m04:/home/docker/cp-test.txt ha-067477:/home/docker/cp-test_ha-067477-m04_ha-067477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m04 "sudo cat /home/docker/cp-test.txt"
E0904 21:01:43.593081  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477 "sudo cat /home/docker/cp-test_ha-067477-m04_ha-067477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m04:/home/docker/cp-test.txt ha-067477-m02:/home/docker/cp-test_ha-067477-m04_ha-067477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m02 "sudo cat /home/docker/cp-test_ha-067477-m04_ha-067477-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 cp ha-067477-m04:/home/docker/cp-test.txt ha-067477-m03:/home/docker/cp-test_ha-067477-m04_ha-067477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 ssh -n ha-067477-m03 "sudo cat /home/docker/cp-test_ha-067477-m04_ha-067477-m03.txt"
E0904 21:01:46.154412  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 node stop m02 -v=7 --alsologtostderr
E0904 21:01:51.275738  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-067477 node stop m02 -v=7 --alsologtostderr: (11.952452198s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr: exit status 7 (736.870989ms)

                                                
                                                
-- stdout --
	ha-067477
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-067477-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-067477-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-067477-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:01:58.246012  762939 out.go:345] Setting OutFile to fd 1 ...
	I0904 21:01:58.246188  762939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:01:58.246204  762939 out.go:358] Setting ErrFile to fd 2...
	I0904 21:01:58.246210  762939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:01:58.246500  762939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 21:01:58.246728  762939 out.go:352] Setting JSON to false
	I0904 21:01:58.246786  762939 mustload.go:65] Loading cluster: ha-067477
	I0904 21:01:58.246882  762939 notify.go:220] Checking for updates...
	I0904 21:01:58.247288  762939 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:01:58.247310  762939 status.go:255] checking status of ha-067477 ...
	I0904 21:01:58.248127  762939 cli_runner.go:164] Run: docker container inspect ha-067477 --format={{.State.Status}}
	I0904 21:01:58.267124  762939 status.go:330] ha-067477 host status = "Running" (err=<nil>)
	I0904 21:01:58.267150  762939 host.go:66] Checking if "ha-067477" exists ...
	I0904 21:01:58.267519  762939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477
	I0904 21:01:58.291130  762939 host.go:66] Checking if "ha-067477" exists ...
	I0904 21:01:58.291510  762939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:01:58.291562  762939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477
	I0904 21:01:58.324225  762939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477/id_rsa Username:docker}
	I0904 21:01:58.411437  762939 ssh_runner.go:195] Run: systemctl --version
	I0904 21:01:58.415899  762939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:01:58.429373  762939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:01:58.487102  762939 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-04 21:01:58.476646602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 21:01:58.487713  762939 kubeconfig.go:125] found "ha-067477" server: "https://192.168.49.254:8443"
	I0904 21:01:58.487750  762939 api_server.go:166] Checking apiserver status ...
	I0904 21:01:58.487803  762939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:01:58.499827  762939 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	I0904 21:01:58.509531  762939 api_server.go:182] apiserver freezer: "7:freezer:/docker/0afbbbd41dcc0411cc9046cbf16dbbc1d819f583ce6b77f2cade079d3bc44056/crio/crio-2c277be602ada3c5202b1564c751cbb763a9b61e581c7abbc1c3a98844392c39"
	I0904 21:01:58.509609  762939 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0afbbbd41dcc0411cc9046cbf16dbbc1d819f583ce6b77f2cade079d3bc44056/crio/crio-2c277be602ada3c5202b1564c751cbb763a9b61e581c7abbc1c3a98844392c39/freezer.state
	I0904 21:01:58.519233  762939 api_server.go:204] freezer state: "THAWED"
	I0904 21:01:58.519260  762939 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 21:01:58.527375  762939 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 21:01:58.527442  762939 status.go:422] ha-067477 apiserver status = Running (err=<nil>)
	I0904 21:01:58.527460  762939 status.go:257] ha-067477 status: &{Name:ha-067477 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:01:58.527478  762939 status.go:255] checking status of ha-067477-m02 ...
	I0904 21:01:58.527796  762939 cli_runner.go:164] Run: docker container inspect ha-067477-m02 --format={{.State.Status}}
	I0904 21:01:58.548112  762939 status.go:330] ha-067477-m02 host status = "Stopped" (err=<nil>)
	I0904 21:01:58.548145  762939 status.go:343] host is not running, skipping remaining checks
	I0904 21:01:58.548152  762939 status.go:257] ha-067477-m02 status: &{Name:ha-067477-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:01:58.548190  762939 status.go:255] checking status of ha-067477-m03 ...
	I0904 21:01:58.548507  762939 cli_runner.go:164] Run: docker container inspect ha-067477-m03 --format={{.State.Status}}
	I0904 21:01:58.567879  762939 status.go:330] ha-067477-m03 host status = "Running" (err=<nil>)
	I0904 21:01:58.567905  762939 host.go:66] Checking if "ha-067477-m03" exists ...
	I0904 21:01:58.568221  762939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477-m03
	I0904 21:01:58.586570  762939 host.go:66] Checking if "ha-067477-m03" exists ...
	I0904 21:01:58.586939  762939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:01:58.586995  762939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m03
	I0904 21:01:58.605370  762939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33553 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m03/id_rsa Username:docker}
	I0904 21:01:58.695544  762939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:01:58.708137  762939 kubeconfig.go:125] found "ha-067477" server: "https://192.168.49.254:8443"
	I0904 21:01:58.708169  762939 api_server.go:166] Checking apiserver status ...
	I0904 21:01:58.708238  762939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:01:58.720235  762939 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1348/cgroup
	I0904 21:01:58.730498  762939 api_server.go:182] apiserver freezer: "7:freezer:/docker/8239dc5f298c81bd478a042edc752a534952ca0a5a82d807d81b631cffcf4639/crio/crio-e5227ddadfa23e59f240b6ca6fc9aaa23f7e53b70ab11bc12129d4a0a6f1be8b"
	I0904 21:01:58.730589  762939 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8239dc5f298c81bd478a042edc752a534952ca0a5a82d807d81b631cffcf4639/crio/crio-e5227ddadfa23e59f240b6ca6fc9aaa23f7e53b70ab11bc12129d4a0a6f1be8b/freezer.state
	I0904 21:01:58.739456  762939 api_server.go:204] freezer state: "THAWED"
	I0904 21:01:58.739498  762939 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 21:01:58.747231  762939 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 21:01:58.747261  762939 status.go:422] ha-067477-m03 apiserver status = Running (err=<nil>)
	I0904 21:01:58.747272  762939 status.go:257] ha-067477-m03 status: &{Name:ha-067477-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:01:58.747320  762939 status.go:255] checking status of ha-067477-m04 ...
	I0904 21:01:58.747665  762939 cli_runner.go:164] Run: docker container inspect ha-067477-m04 --format={{.State.Status}}
	I0904 21:01:58.781453  762939 status.go:330] ha-067477-m04 host status = "Running" (err=<nil>)
	I0904 21:01:58.781482  762939 host.go:66] Checking if "ha-067477-m04" exists ...
	I0904 21:01:58.781788  762939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-067477-m04
	I0904 21:01:58.802530  762939 host.go:66] Checking if "ha-067477-m04" exists ...
	I0904 21:01:58.802877  762939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:01:58.802926  762939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-067477-m04
	I0904 21:01:58.821924  762939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/ha-067477-m04/id_rsa Username:docker}
	I0904 21:01:58.906929  762939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:01:58.918713  762939 status.go:257] ha-067477-m04 status: &{Name:ha-067477-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (24.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 node start m02 -v=7 --alsologtostderr
E0904 21:02:01.517116  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:02:21.998393  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-067477 node start m02 -v=7 --alsologtostderr: (22.795274209s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr: (1.347550067s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.27864949s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (181.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-067477 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-067477 -v=7 --alsologtostderr
E0904 21:03:02.959802  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-067477 -v=7 --alsologtostderr: (37.15898419s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-067477 --wait=true -v=7 --alsologtostderr
E0904 21:03:24.397997  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:03:52.103602  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:04:24.881924  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-067477 --wait=true -v=7 --alsologtostderr: (2m23.732813926s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-067477
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (181.05s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-067477 node delete m03 -v=7 --alsologtostderr: (11.621756956s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-067477 stop -v=7 --alsologtostderr: (35.707449205s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr: exit status 7 (125.695228ms)

                                                
                                                
-- stdout --
	ha-067477
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-067477-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-067477-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:06:19.006230  776854 out.go:345] Setting OutFile to fd 1 ...
	I0904 21:06:19.006499  776854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:06:19.006515  776854 out.go:358] Setting ErrFile to fd 2...
	I0904 21:06:19.006521  776854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:06:19.006901  776854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 21:06:19.007238  776854 out.go:352] Setting JSON to false
	I0904 21:06:19.007288  776854 mustload.go:65] Loading cluster: ha-067477
	I0904 21:06:19.007423  776854 notify.go:220] Checking for updates...
	I0904 21:06:19.008560  776854 config.go:182] Loaded profile config "ha-067477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:06:19.008591  776854 status.go:255] checking status of ha-067477 ...
	I0904 21:06:19.009261  776854 cli_runner.go:164] Run: docker container inspect ha-067477 --format={{.State.Status}}
	I0904 21:06:19.028980  776854 status.go:330] ha-067477 host status = "Stopped" (err=<nil>)
	I0904 21:06:19.029007  776854 status.go:343] host is not running, skipping remaining checks
	I0904 21:06:19.029015  776854 status.go:257] ha-067477 status: &{Name:ha-067477 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:06:19.029046  776854 status.go:255] checking status of ha-067477-m02 ...
	I0904 21:06:19.029369  776854 cli_runner.go:164] Run: docker container inspect ha-067477-m02 --format={{.State.Status}}
	I0904 21:06:19.056703  776854 status.go:330] ha-067477-m02 host status = "Stopped" (err=<nil>)
	I0904 21:06:19.056724  776854 status.go:343] host is not running, skipping remaining checks
	I0904 21:06:19.056731  776854 status.go:257] ha-067477-m02 status: &{Name:ha-067477-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:06:19.056748  776854 status.go:255] checking status of ha-067477-m04 ...
	I0904 21:06:19.057027  776854 cli_runner.go:164] Run: docker container inspect ha-067477-m04 --format={{.State.Status}}
	I0904 21:06:19.080630  776854 status.go:330] ha-067477-m04 host status = "Stopped" (err=<nil>)
	I0904 21:06:19.080656  776854 status.go:343] host is not running, skipping remaining checks
	I0904 21:06:19.080663  776854 status.go:257] ha-067477-m04 status: &{Name:ha-067477-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (77.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-067477 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-067477 --control-plane -v=7 --alsologtostderr: (1m16.471121613s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-067477 status -v=7 --alsologtostderr: (1.004458s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

                                                
                                    
x
+
TestJSONOutput/start/Command (48.85s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-410297 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-410297 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (48.850237817s)
--- PASS: TestJSONOutput/start/Command (48.85s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-410297 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-410297 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-410297 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-410297 --output=json --user=testUser: (5.854965996s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-705358 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-705358 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.047648ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1ba7763d-a62a-4a73-adf9-f1c1853ec3bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-705358] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"23d676d8-f637-4f80-85b4-781bc43879b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19575"}}
	{"specversion":"1.0","id":"e7792961-b3f5-4142-b0ab-ec5efb1a7b3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d37549fe-dcb8-4147-88bb-d0d5f5b28540","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig"}}
	{"specversion":"1.0","id":"591375e0-3d86-4dc9-addd-8aaaf3d50fe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube"}}
	{"specversion":"1.0","id":"a752ba8a-2505-4817-9a12-d860f8b24988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5a431b3a-6d11-47db-89f7-e36c94fe114a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"865fa6f3-d814-4443-b121-6d864b2c3bc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-705358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-705358
--- PASS: TestErrorJSONOutput (0.24s)
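
The lines captured in the stdout block above are CloudEvents-style JSON records emitted by `minikube start --output=json`, with the failure reported as a `io.k8s.sigs.minikube.error` event carrying `exitcode` and `message` fields. A minimal sketch, assuming only the event fields visible in this log (the file name and struct below are illustrative, not part of the test suite), of how such a stream could be scanned for error events:

// events_filter.go - hedged sketch: filter minikube --output=json lines for error events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the keys visible in the log above; any field not shown
// there is intentionally omitted.
type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything mixed into the stream that is not JSON
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping the start command shown above into this filter would surface just the DRV_UNSUPPORTED_OS message and exit code 56.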

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.3s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-325035 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-325035 --network=: (38.168082899s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-325035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-325035
E0904 21:11:41.018311  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-325035: (2.104014618s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.30s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (38.76s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-697401 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-697401 --network=bridge: (36.673368553s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-697401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-697401
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-697401: (2.058530138s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.76s)

                                                
                                    
x
+
TestKicExistingNetwork (32.2s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-877567 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-877567 --network=existing-network: (30.043945605s)
helpers_test.go:175: Cleaning up "existing-network-877567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-877567
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-877567: (1.997807979s)
--- PASS: TestKicExistingNetwork (32.20s)

                                                
                                    
x
+
TestKicCustomSubnet (34.42s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-199813 --subnet=192.168.60.0/24
E0904 21:13:24.398459  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-199813 --subnet=192.168.60.0/24: (32.235617225s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-199813 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-199813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-199813
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-199813: (2.157045423s)
--- PASS: TestKicCustomSubnet (34.42s)
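
The verification step above reads the network's subnet back with `docker network inspect custom-subnet-199813 --format "{{(index .IPAM.Config 0).Subnet}}"` and compares it to the `--subnet` value passed at start. A rough sketch of that check, assuming only that the docker CLI is on PATH (the program name and flags below are illustrative):

// subnet_check.go - hedged sketch of the subnet verification done by the test.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: subnet_check <network> <expected-subnet>")
		os.Exit(2)
	}
	network, want := os.Args[1], os.Args[2]
	// Same Go template the test uses to pull the first IPAM subnet.
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "inspect failed: %v\n", err)
		os.Exit(1)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("subnet matches:", got)
}

For the run above this would be invoked as: subnet_check custom-subnet-199813 192.168.60.0/24.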

                                                
                                    
x
+
TestKicStaticIP (34.78s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-746838 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-746838 --static-ip=192.168.200.200: (32.516500739s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-746838 ip
helpers_test.go:175: Cleaning up "static-ip-746838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-746838
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-746838: (2.094752498s)
--- PASS: TestKicStaticIP (34.78s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (66.57s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-632858 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-632858 --driver=docker  --container-runtime=crio: (30.914032643s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-635401 --driver=docker  --container-runtime=crio
E0904 21:14:47.464955  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-635401 --driver=docker  --container-runtime=crio: (29.981185596s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-632858
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-635401
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-635401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-635401
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-635401: (2.050116067s)
helpers_test.go:175: Cleaning up "first-632858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-632858
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-632858: (2.286461947s)
--- PASS: TestMinikubeProfile (66.57s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.8s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-573892 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-573892 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.799809317s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.80s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-573892 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.85s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-587438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-587438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.848754574s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.85s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-587438 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-573892 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-573892 --alsologtostderr -v=5: (1.648494795s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-587438 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-587438
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-587438: (1.195134746s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.42s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-587438
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-587438: (7.418056919s)
--- PASS: TestMountStart/serial/RestartStopped (8.42s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-587438 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (79.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-330736 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0904 21:16:41.018399  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-330736 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.414130057s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.89s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (7.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-330736 -- rollout status deployment/busybox: (5.365624301s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-dcbmg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-td2hr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-dcbmg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-td2hr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-dcbmg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-td2hr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.76s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-dcbmg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-dcbmg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-td2hr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-330736 -- exec busybox-7dff88458-td2hr -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.18s)
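
The two kubectl exec calls per pod above first resolve host.minikube.internal (the `awk 'NR==5' | cut -d' ' -f3` pipeline pulls the resolved address out of busybox's nslookup output) and then ping that address once from inside the pod. A hedged sketch of the same resolve-then-ping pattern, assuming a plain kubectl on PATH rather than the `out/minikube-linux-arm64 kubectl -p <profile> --` wrapper the test actually uses:

// hostping.go - sketch: verify pods can reach the host gateway address.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// resolveAndPing resolves host.minikube.internal inside the pod, then pings
// the resolved address once from the same pod.
func resolveAndPing(context, pod string) error {
	out, err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		return fmt.Errorf("nslookup in %s: %w", pod, err)
	}
	addr := strings.TrimSpace(string(out))
	if err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+addr).Run(); err != nil {
		return fmt.Errorf("ping %s from %s: %w", addr, pod, err)
	}
	fmt.Printf("%s can reach host %s\n", pod, addr)
	return nil
}

func main() {
	if len(os.Args) < 3 {
		fmt.Fprintln(os.Stderr, "usage: hostping <context> <pod> [pod...]")
		os.Exit(2)
	}
	for _, pod := range os.Args[2:] {
		if err := resolveAndPing(os.Args[1], pod); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}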

                                                
                                    
x
+
TestMultiNode/serial/AddNode (29.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-330736 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-330736 -v 3 --alsologtostderr: (29.015759619s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.69s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-330736 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp testdata/cp-test.txt multinode-330736:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp multinode-330736:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3817653067/001/cp-test_multinode-330736.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp multinode-330736:/home/docker/cp-test.txt multinode-330736-m02:/home/docker/cp-test_multinode-330736_multinode-330736-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m02 "sudo cat /home/docker/cp-test_multinode-330736_multinode-330736-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp multinode-330736:/home/docker/cp-test.txt multinode-330736-m03:/home/docker/cp-test_multinode-330736_multinode-330736-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m03 "sudo cat /home/docker/cp-test_multinode-330736_multinode-330736-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp testdata/cp-test.txt multinode-330736-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp multinode-330736-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3817653067/001/cp-test_multinode-330736-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp multinode-330736-m02:/home/docker/cp-test.txt multinode-330736:/home/docker/cp-test_multinode-330736-m02_multinode-330736.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736 "sudo cat /home/docker/cp-test_multinode-330736-m02_multinode-330736.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp multinode-330736-m02:/home/docker/cp-test.txt multinode-330736-m03:/home/docker/cp-test_multinode-330736-m02_multinode-330736-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m03 "sudo cat /home/docker/cp-test_multinode-330736-m02_multinode-330736-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp testdata/cp-test.txt multinode-330736-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp multinode-330736-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3817653067/001/cp-test_multinode-330736-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp multinode-330736-m03:/home/docker/cp-test.txt multinode-330736:/home/docker/cp-test_multinode-330736-m03_multinode-330736.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736 "sudo cat /home/docker/cp-test_multinode-330736-m03_multinode-330736.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 cp multinode-330736-m03:/home/docker/cp-test.txt multinode-330736-m02:/home/docker/cp-test_multinode-330736-m03_multinode-330736-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 ssh -n multinode-330736-m02 "sudo cat /home/docker/cp-test_multinode-330736-m03_multinode-330736-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.14s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-330736 node stop m03: (1.25811261s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-330736 status: exit status 7 (506.242772ms)

                                                
                                                
-- stdout --
	multinode-330736
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-330736-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-330736-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-330736 status --alsologtostderr: exit status 7 (498.550283ms)

                                                
                                                
-- stdout --
	multinode-330736
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-330736-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-330736-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:17:46.932249  831514 out.go:345] Setting OutFile to fd 1 ...
	I0904 21:17:46.932362  831514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:17:46.932370  831514 out.go:358] Setting ErrFile to fd 2...
	I0904 21:17:46.932375  831514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:17:46.932656  831514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 21:17:46.932871  831514 out.go:352] Setting JSON to false
	I0904 21:17:46.932912  831514 mustload.go:65] Loading cluster: multinode-330736
	I0904 21:17:46.933001  831514 notify.go:220] Checking for updates...
	I0904 21:17:46.933312  831514 config.go:182] Loaded profile config "multinode-330736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:17:46.933323  831514 status.go:255] checking status of multinode-330736 ...
	I0904 21:17:46.933775  831514 cli_runner.go:164] Run: docker container inspect multinode-330736 --format={{.State.Status}}
	I0904 21:17:46.956176  831514 status.go:330] multinode-330736 host status = "Running" (err=<nil>)
	I0904 21:17:46.956198  831514 host.go:66] Checking if "multinode-330736" exists ...
	I0904 21:17:46.956538  831514 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-330736
	I0904 21:17:46.977005  831514 host.go:66] Checking if "multinode-330736" exists ...
	I0904 21:17:46.977341  831514 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:17:46.977392  831514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-330736
	I0904 21:17:46.994444  831514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33664 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/multinode-330736/id_rsa Username:docker}
	I0904 21:17:47.083080  831514 ssh_runner.go:195] Run: systemctl --version
	I0904 21:17:47.087588  831514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:17:47.099686  831514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:17:47.165315  831514 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-04 21:17:47.154647438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 21:17:47.166031  831514 kubeconfig.go:125] found "multinode-330736" server: "https://192.168.67.2:8443"
	I0904 21:17:47.166067  831514 api_server.go:166] Checking apiserver status ...
	I0904 21:17:47.166121  831514 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 21:17:47.177568  831514 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1358/cgroup
	I0904 21:17:47.187837  831514 api_server.go:182] apiserver freezer: "7:freezer:/docker/e827649d011195207bf85c4847ce048d6a9c8464f3a57acfad02514825137b18/crio/crio-9b3b4bfdfe0d1729d4b90327b0d2606e4efd1638352b340f54b2cd0248895ac3"
	I0904 21:17:47.187917  831514 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e827649d011195207bf85c4847ce048d6a9c8464f3a57acfad02514825137b18/crio/crio-9b3b4bfdfe0d1729d4b90327b0d2606e4efd1638352b340f54b2cd0248895ac3/freezer.state
	I0904 21:17:47.197137  831514 api_server.go:204] freezer state: "THAWED"
	I0904 21:17:47.197164  831514 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0904 21:17:47.206725  831514 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0904 21:17:47.206756  831514 status.go:422] multinode-330736 apiserver status = Running (err=<nil>)
	I0904 21:17:47.206772  831514 status.go:257] multinode-330736 status: &{Name:multinode-330736 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:17:47.206790  831514 status.go:255] checking status of multinode-330736-m02 ...
	I0904 21:17:47.207092  831514 cli_runner.go:164] Run: docker container inspect multinode-330736-m02 --format={{.State.Status}}
	I0904 21:17:47.225136  831514 status.go:330] multinode-330736-m02 host status = "Running" (err=<nil>)
	I0904 21:17:47.225163  831514 host.go:66] Checking if "multinode-330736-m02" exists ...
	I0904 21:17:47.225490  831514 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-330736-m02
	I0904 21:17:47.242831  831514 host.go:66] Checking if "multinode-330736-m02" exists ...
	I0904 21:17:47.243159  831514 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 21:17:47.243205  831514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-330736-m02
	I0904 21:17:47.259720  831514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33669 SSHKeyPath:/home/jenkins/minikube-integration/19575-710603/.minikube/machines/multinode-330736-m02/id_rsa Username:docker}
	I0904 21:17:47.347059  831514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 21:17:47.359014  831514 status.go:257] multinode-330736-m02 status: &{Name:multinode-330736-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:17:47.359060  831514 status.go:255] checking status of multinode-330736-m03 ...
	I0904 21:17:47.359397  831514 cli_runner.go:164] Run: docker container inspect multinode-330736-m03 --format={{.State.Status}}
	I0904 21:17:47.376708  831514 status.go:330] multinode-330736-m03 host status = "Stopped" (err=<nil>)
	I0904 21:17:47.376735  831514 status.go:343] host is not running, skipping remaining checks
	I0904 21:17:47.376743  831514 status.go:257] multinode-330736-m03 status: &{Name:multinode-330736-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
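
The stderr trace above shows the per-node status struct minikube assembles (Name, Host, Kubelet, APIServer, Kubeconfig, Worker), and that `minikube status` exits with status 7 when any node is stopped. A hedged sketch of reading that same information programmatically via `minikube status --output json`; the JSON key names below are assumed to match the struct field names seen in this trace, which is not verified here:

// node_health.go - sketch: flag any node in a profile that is not fully running.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// nodeStatus mirrors the fields visible in the status.go trace above.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	profile := "multinode-330736"
	if len(os.Args) > 1 {
		profile = os.Args[1]
	}
	// The command returns a non-zero exit when a node is stopped, so the captured
	// stdout is inspected even when err is non-nil.
	out, _ := exec.Command("minikube", "-p", profile, "status", "--output", "json").Output()
	var nodes []nodeStatus
	if err := json.Unmarshal(out, &nodes); err != nil {
		// Assumption: a single-node profile may emit one object rather than an array.
		var single nodeStatus
		if err2 := json.Unmarshal(out, &single); err2 != nil {
			fmt.Fprintf(os.Stderr, "cannot parse status output: %v\n", err)
			os.Exit(1)
		}
		nodes = []nodeStatus{single}
	}
	for _, n := range nodes {
		if n.Host != "Running" || n.Kubelet != "Running" {
			fmt.Printf("%s is not healthy: host=%s kubelet=%s\n", n.Name, n.Host, n.Kubelet)
		}
	}
}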

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-330736 node start m03 -v=7 --alsologtostderr: (8.856805407s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.58s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (101.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-330736
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-330736
E0904 21:18:04.086001  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-330736: (24.911013097s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-330736 --wait=true -v=8 --alsologtostderr
E0904 21:18:24.397540  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-330736 --wait=true -v=8 --alsologtostderr: (1m16.332371336s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-330736
--- PASS: TestMultiNode/serial/RestartKeepsNodes (101.37s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-330736 node delete m03: (4.798023781s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.45s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-330736 stop: (23.877811388s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-330736 status: exit status 7 (89.228646ms)

                                                
                                                
-- stdout --
	multinode-330736
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-330736-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-330736 status --alsologtostderr: exit status 7 (89.369245ms)

                                                
                                                
-- stdout --
	multinode-330736
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-330736-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:20:07.795502  839305 out.go:345] Setting OutFile to fd 1 ...
	I0904 21:20:07.795681  839305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:07.795690  839305 out.go:358] Setting ErrFile to fd 2...
	I0904 21:20:07.795696  839305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:20:07.795939  839305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 21:20:07.796127  839305 out.go:352] Setting JSON to false
	I0904 21:20:07.796171  839305 mustload.go:65] Loading cluster: multinode-330736
	I0904 21:20:07.796257  839305 notify.go:220] Checking for updates...
	I0904 21:20:07.796579  839305 config.go:182] Loaded profile config "multinode-330736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:20:07.796594  839305 status.go:255] checking status of multinode-330736 ...
	I0904 21:20:07.797090  839305 cli_runner.go:164] Run: docker container inspect multinode-330736 --format={{.State.Status}}
	I0904 21:20:07.815917  839305 status.go:330] multinode-330736 host status = "Stopped" (err=<nil>)
	I0904 21:20:07.815937  839305 status.go:343] host is not running, skipping remaining checks
	I0904 21:20:07.815945  839305 status.go:257] multinode-330736 status: &{Name:multinode-330736 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 21:20:07.815984  839305 status.go:255] checking status of multinode-330736-m02 ...
	I0904 21:20:07.816366  839305 cli_runner.go:164] Run: docker container inspect multinode-330736-m02 --format={{.State.Status}}
	I0904 21:20:07.835520  839305 status.go:330] multinode-330736-m02 host status = "Stopped" (err=<nil>)
	I0904 21:20:07.835541  839305 status.go:343] host is not running, skipping remaining checks
	I0904 21:20:07.835548  839305 status.go:257] multinode-330736-m02 status: &{Name:multinode-330736-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (46.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-330736 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-330736 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (45.879597841s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-330736 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.59s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (35.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-330736
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-330736-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-330736-m02 --driver=docker  --container-runtime=crio: exit status 14 (77.027408ms)

                                                
                                                
-- stdout --
	* [multinode-330736-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-330736-m02' is duplicated with machine name 'multinode-330736-m02' in profile 'multinode-330736'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-330736-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-330736-m03 --driver=docker  --container-runtime=crio: (32.841281914s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-330736
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-330736: exit status 80 (310.287226ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-330736 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-330736-m03 already exists in multinode-330736-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-330736-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-330736-m03: (1.971802596s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.25s)

                                                
                                    
x
+
TestPreload (129.56s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-127424 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0904 21:21:41.018142  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-127424 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.044787133s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-127424 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-127424 image pull gcr.io/k8s-minikube/busybox: (3.083315908s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-127424
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-127424: (5.834830143s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-127424 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0904 21:23:24.398575  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-127424 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.891168402s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-127424 image list
helpers_test.go:175: Cleaning up "test-preload-127424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-127424
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-127424: (2.443863908s)
--- PASS: TestPreload (129.56s)

                                                
                                    
x
+
TestScheduledStopUnix (109.39s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-762056 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-762056 --memory=2048 --driver=docker  --container-runtime=crio: (33.457097858s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-762056 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-762056 -n scheduled-stop-762056
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-762056 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-762056 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-762056 -n scheduled-stop-762056
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-762056
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-762056 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-762056
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-762056: exit status 7 (71.181907ms)

                                                
                                                
-- stdout --
	scheduled-stop-762056
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-762056 -n scheduled-stop-762056
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-762056 -n scheduled-stop-762056: exit status 7 (66.93379ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-762056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-762056
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-762056: (4.411571531s)
--- PASS: TestScheduledStopUnix (109.39s)
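
The sequence above exercises `minikube stop --schedule`, `--cancel-scheduled`, and the single-field status templates `--format={{.TimeToStop}}` and `--format={{.Host}}`, with exit status 7 expected once the host reports Stopped. A small sketch of that schedule-then-verify pattern, assuming a minikube binary on PATH; the profile name is reused from the log above purely as an example:

// scheduled_stop_check.go - sketch: schedule a stop, then poll until Host == Stopped.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// statusField reads one Go-template field from `minikube status`, the same way
// the test reads {{.TimeToStop}} and {{.Host}}.
func statusField(profile, field string) (string, error) {
	out, err := exec.Command("minikube", "status",
		"--format", "{{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	profile := "scheduled-stop-762056"
	// Schedule a stop 15 seconds out, then wait for the host to report Stopped.
	if err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
		fmt.Println("scheduling stop failed:", err)
		return
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		host, _ := statusField(profile, "Host") // exit status 7 is expected once stopped
		if host == "Stopped" {
			fmt.Println("profile stopped as scheduled")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}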

                                                
                                    
x
+
TestInsufficientStorage (10.84s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-105974 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-105974 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.300494077s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5b98b3cb-5eaf-42ef-ae56-484e95f28b07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-105974] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0990cb81-a83e-48c0-adc6-47d36ca531af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19575"}}
	{"specversion":"1.0","id":"85cd58f5-7b66-439c-87ae-51c704b55c2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f335659f-f6a9-4112-966f-03231c84d874","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig"}}
	{"specversion":"1.0","id":"c94d1762-76da-4497-8fec-e217474f0860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube"}}
	{"specversion":"1.0","id":"be9dc09a-5ed3-4ca9-a675-dd2b663181a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b8ca6511-67af-44dc-8db4-6015c5b4a1ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a17b095d-7e33-4ce7-a776-46eb6d0fd043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e9bbc119-6d19-45c3-aa21-c6e8d6259d4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"496ed89f-b6e3-4618-8714-910a8ad2c9cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"be070bb2-f57b-4bb6-bc05-d4bcded9c293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"82577762-5359-4b1f-8e93-0ebbd0958075","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-105974\" primary control-plane node in \"insufficient-storage-105974\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8776992-9ac5-43f8-bee3-2336e47affce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd271f1f-c928-4e4e-aa8a-64034d759f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"77c43569-0f1e-4360-872a-c1e027ab4d0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-105974 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-105974 --output=json --layout=cluster: exit status 7 (287.525707ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-105974","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-105974","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0904 21:25:41.372519  857079 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-105974" does not appear in /home/jenkins/minikube-integration/19575-710603/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-105974 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-105974 --output=json --layout=cluster: exit status 7 (280.868332ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-105974","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-105974","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0904 21:25:41.654948  857139 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-105974" does not appear in /home/jenkins/minikube-integration/19575-710603/kubeconfig
	E0904 21:25:41.665738  857139 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/insufficient-storage-105974/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-105974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-105974
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-105974: (1.966182559s)
--- PASS: TestInsufficientStorage (10.84s)
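
The storage failure above is driven by the test-only variables visible in the JSON events (MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE). A minimal sketch of reproducing the exit-26 result, assuming those variables are also honored outside the integration harness and using a placeholder profile name; the --force bypass comes straight from the emitted advice:

    $ export MINIKUBE_TEST_STORAGE_CAPACITY=100
    $ export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    $ minikube start -p storage-demo --memory=2048 --output=json --driver=docker --container-runtime=crio
    # exits 26 (RSRC_DOCKER_STORAGE); free space with "docker system prune" or skip the check:
    $ minikube start -p storage-demo --force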

                                                
                                    
x
+
TestRunningBinaryUpgrade (96.13s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3857281112 start -p running-upgrade-480621 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3857281112 start -p running-upgrade-480621 --memory=2200 --vm-driver=docker  --container-runtime=crio: (43.207746501s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-480621 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-480621 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.76918752s)
helpers_test.go:175: Cleaning up "running-upgrade-480621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-480621
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-480621: (3.209528682s)
--- PASS: TestRunningBinaryUpgrade (96.13s)
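
The running-binary upgrade above amounts to starting a profile with an older release and then re-running start on the same profile with the binary under test; a sketch with placeholder binary paths and profile name. Note the older release still takes --vm-driver while the current binary uses --driver, exactly as in the commands logged above:

    $ /path/to/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 delete -p upgrade-demo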

                                                
                                    
x
+
TestKubernetesUpgrade (242.2s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-356328 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-356328 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.974263242s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-356328
E0904 21:28:24.397792  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-356328: (1.357704566s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-356328 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-356328 status --format={{.Host}}: exit status 7 (124.724026ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-356328 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-356328 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m10.298386102s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-356328 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-356328 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-356328 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (108.813237ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-356328] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-356328
	    minikube start -p kubernetes-upgrade-356328 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3563282 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-356328 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-356328 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-356328 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.158199666s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-356328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-356328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-356328: (3.013485826s)
--- PASS: TestKubernetesUpgrade (242.20s)
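
As the K8S_DOWNGRADE_UNSUPPORTED output above spells out, an in-place downgrade is refused (exit status 106) and the supported route is to recreate the profile at the older version. A condensed sketch with a placeholder profile name:

    $ minikube start -p k8s-demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    $ minikube stop -p k8s-demo
    $ minikube start -p k8s-demo --kubernetes-version=v1.31.0 --driver=docker --container-runtime=crio   # upgrade is allowed
    $ minikube start -p k8s-demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # downgrade fails, exit 106
    $ minikube delete -p k8s-demo                                                                        # recreate instead, per the suggestion
    $ minikube start -p k8s-demo --kubernetes-version=v1.20.0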

                                                
                                    
x
+
TestMissingContainerUpgrade (164.18s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.527903912 start -p missing-upgrade-089018 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.527903912 start -p missing-upgrade-089018 --memory=2200 --driver=docker  --container-runtime=crio: (1m36.721260438s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-089018
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-089018: (1.799264355s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-089018
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-089018 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-089018 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m2.719816519s)
helpers_test.go:175: Cleaning up "missing-upgrade-089018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-089018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-089018: (2.086727475s)
--- PASS: TestMissingContainerUpgrade (164.18s)
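
The missing-container scenario above deletes the node's Docker container out from under an existing profile and relies on the newer binary to recreate it on the next start; a sketch with a placeholder profile name (minikube names the node container after the profile):

    $ docker stop missing-demo && docker rm missing-demo                        # simulate the lost container
    $ minikube start -p missing-demo --driver=docker --container-runtime=crio   # start recreates the node container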

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482971 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-482971 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (77.169849ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-482971] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
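
The MK_USAGE failure above (exit status 14) comes from combining --no-kubernetes with an explicit --kubernetes-version; as the suggestion in the output notes, any globally pinned version has to be unset first. Placeholder profile name:

    $ minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio   # rejected, exit 14
    $ minikube config unset kubernetes-version
    $ minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio                             # starts without Kubernetes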

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (42.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482971 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482971 --driver=docker  --container-runtime=crio: (41.823406088s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-482971 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (22.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482971 --no-kubernetes --driver=docker  --container-runtime=crio
E0904 21:26:41.018163  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482971 --no-kubernetes --driver=docker  --container-runtime=crio: (20.474332944s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-482971 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-482971 status -o json: exit status 2 (344.85783ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-482971","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-482971
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-482971: (2.043807016s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482971 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482971 --no-kubernetes --driver=docker  --container-runtime=crio: (9.552411972s)
--- PASS: TestNoKubernetes/serial/Start (9.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-482971 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-482971 "sudo systemctl is-active --quiet service kubelet": exit status 1 (374.681054ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
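
The verification above only asks systemd inside the node whether the kubelet unit is active; for a --no-kubernetes profile the expected outcome is the non-zero exit seen here (the remote command exits with status 3, i.e. inactive). Sketch with a placeholder profile name:

    $ minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"
    $ echo $?   # non-zero when the kubelet is not running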

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.46s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-482971
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-482971: (1.52279786s)
--- PASS: TestNoKubernetes/serial/Stop (1.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482971 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482971 --driver=docker  --container-runtime=crio: (7.555367826s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.56s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-482971 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-482971 "sudo systemctl is-active --quiet service kubelet": exit status 1 (401.933619ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.42s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (99.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3481446051 start -p stopped-upgrade-970328 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3481446051 start -p stopped-upgrade-970328 --memory=2200 --vm-driver=docker  --container-runtime=crio: (48.752459389s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3481446051 -p stopped-upgrade-970328 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3481446051 -p stopped-upgrade-970328 stop: (2.51607778s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-970328 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-970328 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.819835591s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.09s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-970328
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-970328: (1.324068871s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

                                                
                                    
x
+
TestPause/serial/Start (59.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-787691 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0904 21:31:27.466721  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:31:41.018535  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-787691 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (59.726713872s)
--- PASS: TestPause/serial/Start (59.73s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (42.78s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-787691 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-787691 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.756646709s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-022672 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-022672 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (177.597459ms)

                                                
                                                
-- stdout --
	* [false-022672] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 21:32:31.069556  893162 out.go:345] Setting OutFile to fd 1 ...
	I0904 21:32:31.069693  893162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:32:31.069729  893162 out.go:358] Setting ErrFile to fd 2...
	I0904 21:32:31.069743  893162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 21:32:31.070022  893162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-710603/.minikube/bin
	I0904 21:32:31.070492  893162 out.go:352] Setting JSON to false
	I0904 21:32:31.071439  893162 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18901,"bootTime":1725466650,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0904 21:32:31.071520  893162 start.go:139] virtualization:  
	I0904 21:32:31.073884  893162 out.go:177] * [false-022672] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0904 21:32:31.075905  893162 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 21:32:31.075982  893162 notify.go:220] Checking for updates...
	I0904 21:32:31.079747  893162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 21:32:31.081530  893162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-710603/kubeconfig
	I0904 21:32:31.083402  893162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-710603/.minikube
	I0904 21:32:31.085305  893162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 21:32:31.087573  893162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 21:32:31.091114  893162 config.go:182] Loaded profile config "pause-787691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0904 21:32:31.091210  893162 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 21:32:31.118747  893162 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0904 21:32:31.118884  893162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 21:32:31.180200  893162 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-04 21:32:31.169628074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0904 21:32:31.180326  893162 docker.go:307] overlay module found
	I0904 21:32:31.182551  893162 out.go:177] * Using the docker driver based on user configuration
	I0904 21:32:31.184243  893162 start.go:297] selected driver: docker
	I0904 21:32:31.184275  893162 start.go:901] validating driver "docker" against <nil>
	I0904 21:32:31.184291  893162 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 21:32:31.186858  893162 out.go:201] 
	W0904 21:32:31.188856  893162 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0904 21:32:31.190987  893162 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-022672 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-022672" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Sep 2024 21:31:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-787691
contexts:
- context:
    cluster: pause-787691
    extensions:
    - extension:
        last-update: Wed, 04 Sep 2024 21:31:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-787691
  name: pause-787691
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-787691
  user:
    client-certificate: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/pause-787691/client.crt
    client-key: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/pause-787691/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-022672

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-022672"

                                                
                                                
----------------------- debugLogs end: false-022672 [took: 3.682892944s] --------------------------------
helpers_test.go:175: Cleaning up "false-022672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-022672
--- PASS: TestNetworkPlugins/group/false (4.07s)
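
The rejection above is the expected guard: with the crio runtime, --cni=false is refused before any node is created (MK_USAGE, exit status 14), because CRI-O needs a CNI plugin for pod networking. A sketch with a placeholder profile name; kindnet stands in here for any valid --cni choice:

    $ minikube start -p cni-demo --cni=false --driver=docker --container-runtime=crio     # rejected: "crio" requires CNI
    $ minikube start -p cni-demo --cni=kindnet --driver=docker --container-runtime=crio   # choose an actual CNI instead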

                                                
                                    
x
+
TestPause/serial/Pause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-787691 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-787691 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-787691 --output=json --layout=cluster: exit status 2 (399.385886ms)

-- stdout --
	{"Name":"pause-787691","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-787691","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)

TestPause/serial/Unpause (0.91s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-787691 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (1.19s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-787691 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-787691 --alsologtostderr -v=5: (1.194878082s)
--- PASS: TestPause/serial/PauseAgain (1.19s)

TestPause/serial/DeletePaused (3.24s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-787691 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-787691 --alsologtostderr -v=5: (3.236148287s)
--- PASS: TestPause/serial/DeletePaused (3.24s)

TestPause/serial/VerifyDeletedResources (5.92s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.860569218s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-787691
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-787691: exit status 1 (16.287945ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-787691: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (5.92s)

TestStartStop/group/old-k8s-version/serial/FirstStart (165.75s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-879294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0904 21:34:44.087304  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:36:41.030869  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-879294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m45.745792352s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (165.75s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-879294 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8360cdbe-8b6f-40b9-a855-b70f12e60aeb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8360cdbe-8b6f-40b9-a855-b70f12e60aeb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003105033s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-879294 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-879294 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-879294 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/Stop (12.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-879294 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-879294 --alsologtostderr -v=3: (12.190413922s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-879294 -n old-k8s-version-879294
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-879294 -n old-k8s-version-879294: exit status 7 (91.097511ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-879294 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/old-k8s-version/serial/SecondStart (153.75s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-879294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-879294 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m33.391227228s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-879294 -n old-k8s-version-879294
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (153.75s)

TestStartStop/group/no-preload/serial/FirstStart (74.17s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-690659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-690659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m14.168095941s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.17s)

TestStartStop/group/no-preload/serial/DeployApp (10.35s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-690659 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [91cc3500-323a-440a-8851-42b1cdc1de1a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0904 21:38:24.397917  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [91cc3500-323a-440a-8851-42b1cdc1de1a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004628529s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-690659 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-690659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-690659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.03707623s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-690659 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (12.1s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-690659 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-690659 --alsologtostderr -v=3: (12.098090053s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-690659 -n no-preload-690659
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-690659 -n no-preload-690659: exit status 7 (75.567356ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-690659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (268.57s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-690659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-690659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m28.191740862s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-690659 -n no-preload-690659
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.57s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pm8l9" [0fa95945-09bc-4891-8e5b-f1aecdc38353] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.011550783s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pm8l9" [0fa95945-09bc-4891-8e5b-f1aecdc38353] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004294441s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-879294 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-879294 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/Pause (2.98s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-879294 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-879294 -n old-k8s-version-879294
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-879294 -n old-k8s-version-879294: exit status 2 (308.33102ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-879294 -n old-k8s-version-879294
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-879294 -n old-k8s-version-879294: exit status 2 (315.816712ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-879294 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-879294 -n old-k8s-version-879294
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-879294 -n old-k8s-version-879294
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

TestStartStop/group/embed-certs/serial/FirstStart (50.83s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-738412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-738412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (50.834061559s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.83s)

TestStartStop/group/embed-certs/serial/DeployApp (11.45s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-738412 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [79bcbb6b-404c-4f60-890c-9779f1205018] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [79bcbb6b-404c-4f60-890c-9779f1205018] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.023047236s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-738412 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-738412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-738412 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (12.02s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-738412 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-738412 --alsologtostderr -v=3: (12.021664193s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-738412 -n embed-certs-738412
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-738412 -n embed-certs-738412: exit status 7 (78.384878ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-738412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (274.46s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-738412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0904 21:41:41.017998  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:42.351583  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:42.358287  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:42.369769  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:42.391312  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:42.432748  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:42.514290  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:42.675972  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:42.998315  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:43.640494  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:44.922119  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:47.484168  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:41:52.605523  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:42:02.847748  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:42:23.329122  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:43:04.291516  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-738412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m34.091470243s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-738412 -n embed-certs-738412
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (274.46s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vvg6z" [10b32d09-2de0-4695-9b9a-a7610e064754] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00412067s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vvg6z" [10b32d09-2de0-4695-9b9a-a7610e064754] Running
E0904 21:43:24.397510  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00395753s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-690659 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-690659 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (3.16s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-690659 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-690659 -n no-preload-690659
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-690659 -n no-preload-690659: exit status 2 (359.191722ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-690659 -n no-preload-690659
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-690659 -n no-preload-690659: exit status 2 (339.276805ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-690659 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-690659 -n no-preload-690659
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-690659 -n no-preload-690659
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.52s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-737883 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-737883 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (51.519982985s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.52s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-737883 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [61317e15-3b76-4bfe-8d4a-a101661a714a] Pending
E0904 21:44:26.214260  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [61317e15-3b76-4bfe-8d4a-a101661a714a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [61317e15-3b76-4bfe-8d4a-a101661a714a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004436355s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-737883 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-737883 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-737883 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-737883 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-737883 --alsologtostderr -v=3: (11.963938767s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-737883 -n default-k8s-diff-port-737883
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-737883 -n default-k8s-diff-port-737883: exit status 7 (69.417205ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-737883 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-737883 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-737883 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m55.87425432s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-737883 -n default-k8s-diff-port-737883
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.24s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xtc6r" [edefbeda-8c4e-4ec1-a4c6-2bca24c0951d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004256587s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xtc6r" [edefbeda-8c4e-4ec1-a4c6-2bca24c0951d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004313717s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-738412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-738412 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.48s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-738412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-738412 --alsologtostderr -v=1: (1.169537998s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-738412 -n embed-certs-738412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-738412 -n embed-certs-738412: exit status 2 (345.603355ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-738412 -n embed-certs-738412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-738412 -n embed-certs-738412: exit status 2 (338.248287ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-738412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-738412 -n embed-certs-738412
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-738412 -n embed-certs-738412
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.48s)

TestStartStop/group/newest-cni/serial/FirstStart (34.86s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-230356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0904 21:46:41.018421  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-230356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (34.857037439s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.86s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-230356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0904 21:46:42.350729  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-230356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.29874568s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-230356 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-230356 --alsologtostderr -v=3: (1.294312035s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-230356 -n newest-cni-230356
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-230356 -n newest-cni-230356: exit status 7 (70.793377ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-230356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (16.67s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-230356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-230356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (15.831872254s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-230356 -n newest-cni-230356
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.67s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-230356 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/newest-cni/serial/Pause (3.05s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-230356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-230356 -n newest-cni-230356
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-230356 -n newest-cni-230356: exit status 2 (340.045515ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-230356 -n newest-cni-230356
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-230356 -n newest-cni-230356: exit status 2 (312.863679ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-230356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-230356 -n newest-cni-230356
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-230356 -n newest-cni-230356
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.05s)

TestNetworkPlugins/group/auto/Start (52.29s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0904 21:47:10.055909  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/old-k8s-version-879294/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (52.289738686s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-022672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.27s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-022672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-58zn9" [d544f115-c0af-418f-861c-feea53e768a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-58zn9" [d544f115-c0af-418f-861c-feea53e768a1] Running
E0904 21:48:07.468542  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/addons-057989/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004614293s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.27s)
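The NetCatPod step is simply a re-apply of testdata/netcat-deployment.yaml followed by a wait for the app=netcat pod to report Running; a rough manual equivalent (the rollout status call is an assumption about how one might wait, the harness polls pod phase instead):

    $ kubectl --context auto-022672 replace --force -f testdata/netcat-deployment.yaml
    $ kubectl --context auto-022672 rollout status deployment/netcat
    $ kubectl --context auto-022672 get pods -l app=netcat -o wide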

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-022672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
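Taken together, the DNS, Localhost and HairPin checks above are three execs into the netcat deployment; condensed here for reference, with the commands copied from the log (they assume the netcat service is reachable by the name netcat on port 8080):

    $ kubectl --context auto-022672 exec deployment/netcat -- nslookup kubernetes.default
    $ kubectl --context auto-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # localhost
    $ kubectl --context auto-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via the service name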

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.64s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0904 21:48:33.702749  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/no-preload-690659/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:48:43.944068  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/no-preload-690659/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:49:04.425740  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/no-preload-690659/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (51.636982256s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.64s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zh8f2" [3fb9a168-f130-492b-8d01-353557a9e02d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004052741s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
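The ControllerPod gate only waits for the kindnet DaemonSet pod to be Running; a quick manual spot-check using the same namespace and label the test polls (the kubectl wait form is an assumption, the harness uses its own poller):

    $ kubectl --context kindnet-022672 get pods -n kube-system -l app=kindnet -o wide
    $ kubectl --context kindnet-022672 wait --for=condition=Ready pod -l app=kindnet -n kube-system --timeout=10m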

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-022672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-022672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vqbn2" [7786f688-eefd-432a-9fea-9181757158e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vqbn2" [7786f688-eefd-432a-9fea-9181757158e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004130074s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-022672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jfq8l" [547144cf-d2f6-4e02-bdcd-178f529a06e9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003952458s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jfq8l" [547144cf-d2f6-4e02-bdcd-178f529a06e9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004021038s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-737883 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)
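The AddonExistsAfterStop check repeats the dashboard probe after the restart; a manual equivalent using the namespace, label and describe target shown in the log above:

    $ kubectl --context default-k8s-diff-port-737883 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    $ kubectl --context default-k8s-diff-port-737883 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard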

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-737883 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.49s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-737883 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-737883 --alsologtostderr -v=1: (1.086047104s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-737883 -n default-k8s-diff-port-737883
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-737883 -n default-k8s-diff-port-737883: exit status 2 (390.623841ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-737883 -n default-k8s-diff-port-737883
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-737883 -n default-k8s-diff-port-737883: exit status 2 (439.87548ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-737883 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-737883 --alsologtostderr -v=1: (1.635208513s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-737883 -n default-k8s-diff-port-737883
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-737883 -n default-k8s-diff-port-737883
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.49s)
E0904 21:53:51.150918  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/no-preload-690659/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:21.057211  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:21.953526  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:21.959851  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:21.971394  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:21.992839  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:22.034578  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:22.116183  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:22.277795  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:22.599781  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:23.241797  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:24.524104  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:25.267039  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:25.273390  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:25.284775  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:25.306248  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:25.347744  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:25.429231  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:25.590610  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:25.912570  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:26.554976  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:27.085463  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:27.836491  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:30.397834  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:32.207874  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:35.519950  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
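The E0904 cert_rotation lines interleaved above appear to come from the client certificate watcher still pointing at client.crt files of profiles torn down earlier in the run; they are log noise and do not fail any test. If in doubt, the profiles still present at this point can be listed with:

    $ out/minikube-linux-arm64 profile list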

                                                
                                    
TestNetworkPlugins/group/calico/Start (72.84s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m12.839639208s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.84s)

TestNetworkPlugins/group/custom-flannel/Start (71.87s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0904 21:51:07.308670  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/no-preload-690659/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.866411072s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.87s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nghc2" [91f5b34d-773a-42b2-a2b4-a6a1e8fdd07a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006466177s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-022672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-022672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rfmqk" [046152e1-7b92-4ab5-b3b8-12e02e61f33a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rfmqk" [046152e1-7b92-4ab5-b3b8-12e02e61f33a] Running
E0904 21:51:24.089448  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/functional-262207/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004323344s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-022672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-022672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-44sbq" [ee0e9dbc-b476-459b-a823-cc2907bc214a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-44sbq" [ee0e9dbc-b476-459b-a823-cc2907bc214a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004109997s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-022672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/calico/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-022672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (51.68s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (51.682384303s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.68s)

TestNetworkPlugins/group/flannel/Start (59.29s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.293335615s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.29s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-022672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-022672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-thcpk" [36801ba6-aecf-49ba-a7fc-5b96213a3218] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-thcpk" [36801ba6-aecf-49ba-a7fc-5b96213a3218] Running
E0904 21:52:59.099230  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:52:59.105673  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:52:59.117043  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:52:59.138375  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.00446613s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-022672 exec deployment/netcat -- nslookup kubernetes.default
E0904 21:52:59.179994  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:52:59.261343  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0904 21:52:59.424184  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
E0904 21:53:01.676731  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kube-flannel-ds-xw5gw" [c601bab4-ea05-4228-bd09-2cdab04d818d] Running
E0904 21:53:04.238808  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004382312s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-022672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/flannel/NetCatPod (12.41s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-022672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tzrzk" [36831ed8-35d2-417d-9524-6fbec20bb0c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0904 21:53:09.361212  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/auto-022672/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-tzrzk" [36831ed8-35d2-417d-9524-6fbec20bb0c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004917037s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.41s)

TestNetworkPlugins/group/flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-022672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/bridge/Start (80.18s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-022672 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.17532121s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.18s)
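The Start jobs in this group all drive the same start invocation with a different --cni selection; a condensed, purely illustrative sketch of part of the matrix this run exercises (the harness actually starts each profile from its own test, and the auto, custom-flannel and enable-default-cni variants use different flags as shown in their own logs):

    for cni in kindnet calico flannel bridge; do
      out/minikube-linux-arm64 start -p "${cni}-022672" --memory=3072 --alsologtostderr \
        --wait=true --wait-timeout=15m --cni=$cni --driver=docker --container-runtime=crio
    done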

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-022672 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (11.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-022672 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2kbb7" [5fac93ef-aa23-4e78-b85c-11478558e110] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0904 21:54:42.449416  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/kindnet-022672/client.crt: no such file or directory" logger="UnhandledError"
E0904 21:54:45.762152  715981 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/default-k8s-diff-port-737883/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-2kbb7" [5fac93ef-aa23-4e78-b85c-11478558e110] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004074584s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-022672 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-022672 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (30/328)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-053885 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-053885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-053885
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-133036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-133036
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
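Even skipped groups create a placeholder profile that the helpers delete afterwards, as in the "delete -p" call above. If a run is interrupted before that cleanup step, leftover profiles can be cleared by hand; a minimal sketch using the same binary the suite invokes (commands assumed to be run from the workspace root):

  out/minikube-linux-arm64 profile list                            # shows any stragglers
  out/minikube-linux-arm64 delete -p disable-driver-mounts-133036  # remove one leftover profile
  out/minikube-linux-arm64 delete --all                            # or remove every leftover profile at once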

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-022672 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-022672" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Sep 2024 21:31:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-787691
contexts:
- context:
    cluster: pause-787691
    extensions:
    - extension:
        last-update: Wed, 04 Sep 2024 21:31:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-787691
  name: pause-787691
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-787691
  user:
    client-certificate: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/pause-787691/client.crt
    client-key: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/pause-787691/client.key
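The dump above explains every "context was not found" failure in this section: the host kubeconfig only carries the pause-787691 profile and its current-context is empty, so probes pinned to kubenet-022672 have nothing to resolve against. A minimal sketch to confirm that from the same host (assuming kubectl reads the kubeconfig shown above):

  kubectl config get-contexts -o name           # lists only pause-787691
  kubectl config current-context                # errors: current-context is not set
  kubectl --context kubenet-022672 get pods     # fails with "context was not found", as logged above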

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-022672

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-022672"

                                                
                                                
----------------------- debugLogs end: kubenet-022672 [took: 3.345284602s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-022672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-022672
--- SKIP: TestNetworkPlugins/group/kubenet (3.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-022672 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-022672" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19575-710603/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Sep 2024 21:32:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-787691
contexts:
- context:
    cluster: pause-787691
    extensions:
    - extension:
        last-update: Wed, 04 Sep 2024 21:32:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-787691
  name: pause-787691
current-context: pause-787691
kind: Config
preferences: {}
users:
- name: pause-787691
  user:
    client-certificate: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/pause-787691/client.crt
    client-key: /home/jenkins/minikube-integration/19575-710603/.minikube/profiles/pause-787691/client.key
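The cilium probes fail for the same reason; the only difference from the kubenet dump is that current-context now points at pause-787691. A short sketch of what that changes (again assuming kubectl uses the kubeconfig above):

  kubectl config current-context                # prints pause-787691
  kubectl --context cilium-022672 get pods -A   # still fails: context "cilium-022672" does not exist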

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-022672

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-022672" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-022672"

                                                
                                                
----------------------- debugLogs end: cilium-022672 [took: 5.234469614s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-022672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-022672
--- SKIP: TestNetworkPlugins/group/cilium (5.46s)

                                                
                                    