Test Report: Docker_Linux_crio_arm64 18703

817bcb10c8415237264ed1ad2e32746beadbf0a3:2024-04-20:34116

Failed tests (4/327)

Order  Failed test                                  Duration (s)
-----  -------------------------------------------  ------------
30     TestAddons/parallel/Ingress                  168.64
32     TestAddons/parallel/MetricsServer            342.2
36     TestAddons/parallel/Headlamp                 3.02
173    TestMultiControlPlane/serial/RestartCluster  123.58
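
Each failing test below can be re-run on its own via Go's test filter; a minimal sketch, assuming the minikube repository's standard integration-test layout (the package path and any extra harness flags are assumptions, not shown in this report):

	# re-run only the ingress failure, with a generous timeout
	go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m
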
TestAddons/parallel/Ingress (168.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-747503 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-747503 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-747503 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ef94d41d-bf78-46f9-80ca-8d83499cf88f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ef94d41d-bf78-46f9-80ca-8d83499cf88f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003478712s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-747503 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.0158941s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
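
Note: minikube ssh surfaces the remote command's exit status, and exit code 28 from curl is CURLE_OPERATION_TIMEDOUT, so the request stalled rather than being refused. A hedged manual re-check against this run's profile might look like:

	# verbose output plus an explicit client timeout shows where the connection hangs
	out/minikube-linux-arm64 -p addons-747503 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
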
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-747503 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.071764342s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
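
Note: "connection timed out; no servers could be reached" means the resolver at 192.168.49.2 (the node IP printed above) never answered at all; a missing record would instead come back as NXDOMAIN. A sketch for telling the two apart, with short timeouts (option values are illustrative):

	# fail fast instead of sitting through the default retries
	nslookup -timeout=5 -retry=1 hello-john.test 192.168.49.2
	# dig reports the status explicitly (NXDOMAIN vs. no response)
	dig @192.168.49.2 hello-john.test +time=5 +tries=1
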
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-747503 addons disable ingress-dns --alsologtostderr -v=1: (1.078269373s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-747503 addons disable ingress --alsologtostderr -v=1: (7.77597961s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-747503
helpers_test.go:235: (dbg) docker inspect addons-747503:

-- stdout --
	[
	    {
	        "Id": "038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56",
	        "Created": "2024-04-20T00:46:38.106832296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1644719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-20T00:46:38.423221308Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3b2d88ca3ca9b0dbaf60124ea2550b937bd64c7063d7cb640718ddb37cba13b1",
	        "ResolvConfPath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/hostname",
	        "HostsPath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/hosts",
	        "LogPath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56-json.log",
	        "Name": "/addons-747503",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-747503:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-747503",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542-init/diff:/var/lib/docker/overlay2/e0471a8635b1d2c4e15ee92afa46c7d34f76188a5b6aa3cb3689b7cec908b9a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-747503",
	                "Source": "/var/lib/docker/volumes/addons-747503/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-747503",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-747503",
	                "name.minikube.sigs.k8s.io": "addons-747503",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2bf060fd849aa8a792c66482994fdba957bcf5fad9bd2decda24bd7d8500a7b5",
	            "SandboxKey": "/var/run/docker/netns/2bf060fd849a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34675"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34674"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34671"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34673"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34672"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-747503": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "64e1715d5e750e9daed359ac38e3073a5c93c82f8a5daf2e135f2d0b5be8da62",
	                    "EndpointID": "31ed3dc6d507db832465fc3d5d178d5ab6552b0ea16ea63ec1d876b06129484e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-747503",
	                        "038fb1234c5e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
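
Note: the full inspect dump above can be narrowed with docker inspect's Go-template flag when only one field matters; the same template the harness itself uses later for the SSH port looks like:

	# prints the host port bound to the container's 22/tcp (34675 in this run)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-747503
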
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-747503 -n addons-747503
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-747503 logs -n 25: (1.477258563s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-161385   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | -p download-only-161385                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-161385                                                                     | download-only-161385   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-784633                                                                     | download-only-784633   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-161385                                                                     | download-only-161385   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| start   | --download-only -p                                                                          | download-docker-407942 | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | download-docker-407942                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-407942                                                                   | download-docker-407942 | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-562090   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | binary-mirror-562090                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39787                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-562090                                                                     | binary-mirror-562090   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| addons  | enable dashboard -p                                                                         | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-747503 --wait=true                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-747503 ip                                                                            | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:49 UTC | 20 Apr 24 00:49 UTC |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:49 UTC | 20 Apr 24 00:49 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | -p addons-747503                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-747503 ssh cat                                                                       | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | /opt/local-path-provisioner/pvc-b29b3cd7-c850-4a4e-b0ba-8a8cc403a41d_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-747503 addons                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-747503 addons                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC |                     |
	|         | -p addons-747503                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-747503 ssh curl -s                                                                   | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-747503 ip                                                                            | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:53 UTC | 20 Apr 24 00:53 UTC |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:53 UTC | 20 Apr 24 00:53 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:53 UTC | 20 Apr 24 00:53 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:46:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:46:14.607015 1644261 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:46:14.607178 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:46:14.607208 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:46:14.607226 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:46:14.607498 1644261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 00:46:14.607984 1644261 out.go:298] Setting JSON to false
	I0420 00:46:14.608870 1644261 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26921,"bootTime":1713547053,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0420 00:46:14.608940 1644261 start.go:139] virtualization:  
	I0420 00:46:14.612689 1644261 out.go:177] * [addons-747503] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0420 00:46:14.614357 1644261 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:46:14.616082 1644261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:46:14.614429 1644261 notify.go:220] Checking for updates...
	I0420 00:46:14.619849 1644261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 00:46:14.621777 1644261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	I0420 00:46:14.623523 1644261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0420 00:46:14.625229 1644261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:46:14.627320 1644261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:46:14.645723 1644261 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0420 00:46:14.645835 1644261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:46:14.712118 1644261 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-20 00:46:14.700723825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:46:14.712238 1644261 docker.go:295] overlay module found
	I0420 00:46:14.714333 1644261 out.go:177] * Using the docker driver based on user configuration
	I0420 00:46:14.715905 1644261 start.go:297] selected driver: docker
	I0420 00:46:14.715921 1644261 start.go:901] validating driver "docker" against <nil>
	I0420 00:46:14.715934 1644261 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:46:14.716574 1644261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:46:14.765511 1644261 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-20 00:46:14.755476473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:46:14.765687 1644261 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0420 00:46:14.765914 1644261 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:46:14.767783 1644261 out.go:177] * Using Docker driver with root privileges
	I0420 00:46:14.769372 1644261 cni.go:84] Creating CNI manager for ""
	I0420 00:46:14.769396 1644261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:46:14.769406 1644261 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0420 00:46:14.769486 1644261 start.go:340] cluster config:
	{Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:46:14.771617 1644261 out.go:177] * Starting "addons-747503" primary control-plane node in "addons-747503" cluster
	I0420 00:46:14.773185 1644261 cache.go:121] Beginning downloading kic base image for docker with crio
	I0420 00:46:14.774855 1644261 out.go:177] * Pulling base image v0.0.43 ...
	I0420 00:46:14.776595 1644261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:14.776634 1644261 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon
	I0420 00:46:14.776648 1644261 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0420 00:46:14.776672 1644261 cache.go:56] Caching tarball of preloaded images
	I0420 00:46:14.776753 1644261 preload.go:173] Found /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0420 00:46:14.776764 1644261 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:46:14.777129 1644261 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/config.json ...
	I0420 00:46:14.777263 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/config.json: {Name:mkc5932488b9adc511b83497f974c2edc34e9770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:14.789608 1644261 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 to local cache
	I0420 00:46:14.789711 1644261 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local cache directory
	I0420 00:46:14.789728 1644261 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local cache directory, skipping pull
	I0420 00:46:14.789733 1644261 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 exists in cache, skipping pull
	I0420 00:46:14.789741 1644261 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 as a tarball
	I0420 00:46:14.789746 1644261 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 from local cache
	I0420 00:46:31.319259 1644261 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 from cached tarball
	I0420 00:46:31.319302 1644261 cache.go:194] Successfully downloaded all kic artifacts
	I0420 00:46:31.319332 1644261 start.go:360] acquireMachinesLock for addons-747503: {Name:mk90f80baada2f8c104726bc92d1956d63d494dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:46:31.319827 1644261 start.go:364] duration metric: took 471.731µs to acquireMachinesLock for "addons-747503"
	I0420 00:46:31.319867 1644261 start.go:93] Provisioning new machine with config: &{Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:46:31.319953 1644261 start.go:125] createHost starting for "" (driver="docker")
	I0420 00:46:31.322194 1644261 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0420 00:46:31.322447 1644261 start.go:159] libmachine.API.Create for "addons-747503" (driver="docker")
	I0420 00:46:31.322484 1644261 client.go:168] LocalClient.Create starting
	I0420 00:46:31.322598 1644261 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem
	I0420 00:46:31.615216 1644261 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem
	I0420 00:46:31.818172 1644261 cli_runner.go:164] Run: docker network inspect addons-747503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0420 00:46:31.832341 1644261 cli_runner.go:211] docker network inspect addons-747503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0420 00:46:31.832434 1644261 network_create.go:281] running [docker network inspect addons-747503] to gather additional debugging logs...
	I0420 00:46:31.832456 1644261 cli_runner.go:164] Run: docker network inspect addons-747503
	W0420 00:46:31.845135 1644261 cli_runner.go:211] docker network inspect addons-747503 returned with exit code 1
	I0420 00:46:31.845171 1644261 network_create.go:284] error running [docker network inspect addons-747503]: docker network inspect addons-747503: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-747503 not found
	I0420 00:46:31.845184 1644261 network_create.go:286] output of [docker network inspect addons-747503]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-747503 not found
	
	** /stderr **
	I0420 00:46:31.845292 1644261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0420 00:46:31.858385 1644261 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024d75e0}
	I0420 00:46:31.858427 1644261 network_create.go:124] attempt to create docker network addons-747503 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0420 00:46:31.858487 1644261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-747503 addons-747503
	I0420 00:46:31.918669 1644261 network_create.go:108] docker network addons-747503 192.168.49.0/24 created
	I0420 00:46:31.918704 1644261 kic.go:121] calculated static IP "192.168.49.2" for the "addons-747503" container
	I0420 00:46:31.918779 1644261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0420 00:46:31.932121 1644261 cli_runner.go:164] Run: docker volume create addons-747503 --label name.minikube.sigs.k8s.io=addons-747503 --label created_by.minikube.sigs.k8s.io=true
	I0420 00:46:31.946137 1644261 oci.go:103] Successfully created a docker volume addons-747503
	I0420 00:46:31.946230 1644261 cli_runner.go:164] Run: docker run --rm --name addons-747503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747503 --entrypoint /usr/bin/test -v addons-747503:/var gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -d /var/lib
	I0420 00:46:33.904376 1644261 cli_runner.go:217] Completed: docker run --rm --name addons-747503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747503 --entrypoint /usr/bin/test -v addons-747503:/var gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -d /var/lib: (1.958105111s)
	I0420 00:46:33.904409 1644261 oci.go:107] Successfully prepared a docker volume addons-747503
	I0420 00:46:33.904447 1644261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:33.904466 1644261 kic.go:194] Starting extracting preloaded images to volume ...
	I0420 00:46:33.904548 1644261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747503:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -I lz4 -xf /preloaded.tar -C /extractDir
	I0420 00:46:38.033459 1644261 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747503:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -I lz4 -xf /preloaded.tar -C /extractDir: (4.128855513s)
	I0420 00:46:38.033498 1644261 kic.go:203] duration metric: took 4.129027815s to extract preloaded images to volume ...
	W0420 00:46:38.033666 1644261 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0420 00:46:38.033783 1644261 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0420 00:46:38.092961 1644261 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-747503 --name addons-747503 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747503 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-747503 --network addons-747503 --ip 192.168.49.2 --volume addons-747503:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737
	I0420 00:46:38.431321 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Running}}
	I0420 00:46:38.449111 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:46:38.473100 1644261 cli_runner.go:164] Run: docker exec addons-747503 stat /var/lib/dpkg/alternatives/iptables
	I0420 00:46:38.539136 1644261 oci.go:144] the created container "addons-747503" has a running status.
	I0420 00:46:38.539177 1644261 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa...
	I0420 00:46:38.988697 1644261 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0420 00:46:39.013673 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:46:39.036196 1644261 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0420 00:46:39.036217 1644261 kic_runner.go:114] Args: [docker exec --privileged addons-747503 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0420 00:46:39.118596 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:46:39.142860 1644261 machine.go:94] provisionDockerMachine start ...
	I0420 00:46:39.142976 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:39.167812 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:39.168086 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:39.168096 1644261 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 00:46:39.349580 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747503
	
	I0420 00:46:39.349601 1644261 ubuntu.go:169] provisioning hostname "addons-747503"
	I0420 00:46:39.349678 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:39.377796 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:39.378035 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:39.378046 1644261 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-747503 && echo "addons-747503" | sudo tee /etc/hostname
	I0420 00:46:39.558224 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747503
	
	I0420 00:46:39.558419 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:39.575363 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:39.575600 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:39.575617 1644261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-747503' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-747503/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-747503' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:46:39.717750 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:46:39.717780 1644261 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18703-1638187/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-1638187/.minikube}
	I0420 00:46:39.717798 1644261 ubuntu.go:177] setting up certificates
	I0420 00:46:39.717807 1644261 provision.go:84] configureAuth start
	I0420 00:46:39.717871 1644261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747503
	I0420 00:46:39.734066 1644261 provision.go:143] copyHostCerts
	I0420 00:46:39.734147 1644261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem (1082 bytes)
	I0420 00:46:39.734277 1644261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem (1123 bytes)
	I0420 00:46:39.734339 1644261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem (1675 bytes)
	I0420 00:46:39.734390 1644261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem org=jenkins.addons-747503 san=[127.0.0.1 192.168.49.2 addons-747503 localhost minikube]
	I0420 00:46:40.231219 1644261 provision.go:177] copyRemoteCerts
	I0420 00:46:40.231290 1644261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:46:40.231331 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.247276 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
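	(Editor's note: a minimal sketch of reaching the kic node over the mapped SSH port, assuming only values already present in this log — host port 34675, the id_rsa key path, user docker; StrictHostKeyChecking is disabled here only because the host key is freshly generated.)
	  PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-747503)
	  ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa \
	    -p "$PORT" docker@127.0.0.1 hostname   # expect: addons-747503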
	I0420 00:46:40.346662 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 00:46:40.371651 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0420 00:46:40.396149 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0420 00:46:40.421133 1644261 provision.go:87] duration metric: took 703.312596ms to configureAuth
	I0420 00:46:40.421162 1644261 ubuntu.go:193] setting minikube options for container-runtime
	I0420 00:46:40.421357 1644261 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:46:40.421463 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.436686 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:40.436931 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:40.436947 1644261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:46:40.681193 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:46:40.681219 1644261 machine.go:97] duration metric: took 1.538331373s to provisionDockerMachine
	I0420 00:46:40.681230 1644261 client.go:171] duration metric: took 9.358739082s to LocalClient.Create
	I0420 00:46:40.681274 1644261 start.go:167] duration metric: took 9.358813131s to libmachine.API.Create "addons-747503"
	I0420 00:46:40.681289 1644261 start.go:293] postStartSetup for "addons-747503" (driver="docker")
	I0420 00:46:40.681301 1644261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:46:40.681386 1644261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:46:40.681463 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.698546 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.802764 1644261 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:46:40.805936 1644261 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0420 00:46:40.805975 1644261 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0420 00:46:40.806008 1644261 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0420 00:46:40.806022 1644261 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0420 00:46:40.806034 1644261 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/addons for local assets ...
	I0420 00:46:40.806115 1644261 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/files for local assets ...
	I0420 00:46:40.806144 1644261 start.go:296] duration metric: took 124.848597ms for postStartSetup
	I0420 00:46:40.806464 1644261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747503
	I0420 00:46:40.821587 1644261 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/config.json ...
	I0420 00:46:40.821882 1644261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:46:40.821936 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.835949 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.934325 1644261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0420 00:46:40.938838 1644261 start.go:128] duration metric: took 9.618867781s to createHost
	I0420 00:46:40.938860 1644261 start.go:83] releasing machines lock for "addons-747503", held for 9.61901377s
	I0420 00:46:40.938948 1644261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747503
	I0420 00:46:40.954767 1644261 ssh_runner.go:195] Run: cat /version.json
	I0420 00:46:40.954809 1644261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:46:40.954838 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.954856 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.973750 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.987873 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:41.073383 1644261 ssh_runner.go:195] Run: systemctl --version
	I0420 00:46:41.192077 1644261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:46:41.344667 1644261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0420 00:46:41.349255 1644261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:46:41.370360 1644261 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0420 00:46:41.370464 1644261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:46:41.403068 1644261 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0420 00:46:41.403146 1644261 start.go:494] detecting cgroup driver to use...
	I0420 00:46:41.403194 1644261 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0420 00:46:41.403271 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:46:41.419319 1644261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:46:41.431512 1644261 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:46:41.431608 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:46:41.446179 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:46:41.465996 1644261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:46:41.554380 1644261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:46:41.655130 1644261 docker.go:233] disabling docker service ...
	I0420 00:46:41.655197 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:46:41.675820 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:46:41.688324 1644261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:46:41.772551 1644261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:46:41.869236 1644261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:46:41.880923 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:46:41.897306 1644261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:46:41.897393 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.908466 1644261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:46:41.908556 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.919831 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.930232 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.940033 1644261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:46:41.949454 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.959319 1644261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.974839 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.984469 1644261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:46:41.993979 1644261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:46:42.008022 1644261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:46:42.111879 1644261 ssh_runner.go:195] Run: sudo systemctl restart crio
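	(Editor's note: taken together, the sed edits above should leave the fields sketched in the comments below in /etc/crio/crio.conf.d/02-crio.conf; this is a hedged reconstruction — the file's other contents are not shown in this log.)
	  # expected fields after the edits (reconstruction):
	  #   pause_image = "registry.k8s.io/pause:3.9"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf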
	I0420 00:46:42.238392 1644261 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:46:42.238485 1644261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:46:42.242714 1644261 start.go:562] Will wait 60s for crictl version
	I0420 00:46:42.242782 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:46:42.246739 1644261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:46:42.289378 1644261 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0420 00:46:42.289488 1644261 ssh_runner.go:195] Run: crio --version
	I0420 00:46:42.333568 1644261 ssh_runner.go:195] Run: crio --version
	I0420 00:46:42.377897 1644261 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0420 00:46:42.379595 1644261 cli_runner.go:164] Run: docker network inspect addons-747503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0420 00:46:42.392523 1644261 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0420 00:46:42.396287 1644261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:46:42.406719 1644261 kubeadm.go:877] updating cluster {Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 00:46:42.406844 1644261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:42.406909 1644261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:46:42.492542 1644261 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:46:42.492568 1644261 crio.go:433] Images already preloaded, skipping extraction
	I0420 00:46:42.492648 1644261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:46:42.532591 1644261 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:46:42.532618 1644261 cache_images.go:84] Images are preloaded, skipping loading
	I0420 00:46:42.532628 1644261 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 crio true true} ...
	I0420 00:46:42.532741 1644261 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-747503 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 00:46:42.532824 1644261 ssh_runner.go:195] Run: crio config
	I0420 00:46:42.580609 1644261 cni.go:84] Creating CNI manager for ""
	I0420 00:46:42.580639 1644261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:46:42.580660 1644261 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 00:46:42.580718 1644261 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-747503 NodeName:addons-747503 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 00:46:42.580886 1644261 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-747503"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0420 00:46:42.580966 1644261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:46:42.590117 1644261 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 00:46:42.590190 1644261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 00:46:42.599044 1644261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0420 00:46:42.617636 1644261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:46:42.635779 1644261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
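	(Editor's note: a hedged sketch — the freshly scp'd config above could be validated without side effects before the real init further below; --dry-run is a standard kubeadm init flag, though the run recorded in this log goes straight to init with --ignore-preflight-errors instead.)
	  sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run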
	I0420 00:46:42.653757 1644261 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0420 00:46:42.657403 1644261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:46:42.668479 1644261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:46:42.748825 1644261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:46:42.762791 1644261 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503 for IP: 192.168.49.2
	I0420 00:46:42.762861 1644261 certs.go:194] generating shared ca certs ...
	I0420 00:46:42.762893 1644261 certs.go:226] acquiring lock for ca certs: {Name:mkf02d2bd3e0f29e12b7cec7c5b9a48566830288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:42.763075 1644261 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key
	I0420 00:46:42.952911 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt ...
	I0420 00:46:42.952946 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt: {Name:mk49370c70b4ffc1cbcd1227f487de3de2af3ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:42.953182 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key ...
	I0420 00:46:42.953200 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key: {Name:mk2877a201a5ba28e426f127f32ae06fa0033f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:42.953299 1644261 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key
	I0420 00:46:43.525747 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt ...
	I0420 00:46:43.525778 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt: {Name:mk695cd51a6cd9c3c06377fb3cd1872da426efc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.527292 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key ...
	I0420 00:46:43.527309 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key: {Name:mkef065e7c04a8c6100720cceafeab1ff9cb96b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.527942 1644261 certs.go:256] generating profile certs ...
	I0420 00:46:43.528022 1644261 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.key
	I0420 00:46:43.528041 1644261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt with IP's: []
	I0420 00:46:43.960821 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt ...
	I0420 00:46:43.960852 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: {Name:mk84a033ba366df9ffa0dfef7e831bb3e5c0f737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.961043 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.key ...
	I0420 00:46:43.961056 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.key: {Name:mk83bfd7e187e91bdb04631dbc1011de4d92fc28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.961606 1644261 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09
	I0420 00:46:43.961631 1644261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0420 00:46:44.377939 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09 ...
	I0420 00:46:44.377977 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09: {Name:mk0a88b731f275f786bbac6d601f7f9fda080c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:44.378572 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09 ...
	I0420 00:46:44.378591 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09: {Name:mkd4e59169d95ea0e222dd2e9bcaa9e7684c6506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:44.379246 1644261 certs.go:381] copying /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09 -> /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt
	I0420 00:46:44.379343 1644261 certs.go:385] copying /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09 -> /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key
	I0420 00:46:44.379402 1644261 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key
	I0420 00:46:44.379425 1644261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt with IP's: []
	I0420 00:46:45.155458 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt ...
	I0420 00:46:45.155496 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt: {Name:mk297ed885f196ef52980a6bcd4c4dd306202aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:45.155722 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key ...
	I0420 00:46:45.155739 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key: {Name:mk1a0c4c69f4e1c4e307aafc0f32c462980fe679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:45.155970 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem (1679 bytes)
	I0420 00:46:45.156033 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem (1082 bytes)
	I0420 00:46:45.156076 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:46:45.156120 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem (1675 bytes)
	I0420 00:46:45.156827 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:46:45.185776 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:46:45.215921 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:46:45.246659 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0420 00:46:45.276336 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0420 00:46:45.302931 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:46:45.330184 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:46:45.355925 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 00:46:45.380042 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:46:45.404816 1644261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 00:46:45.422615 1644261 ssh_runner.go:195] Run: openssl version
	I0420 00:46:45.427939 1644261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:46:45.437580 1644261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:46:45.441275 1644261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:46 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:46:45.441378 1644261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:46:45.448324 1644261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
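	(Editor's note: the symlink name b5213941.0 is the OpenSSL subject hash of the CA certificate, which is exactly what the `openssl x509 -hash` call above computes; as a sketch:)
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941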
	I0420 00:46:45.457860 1644261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:46:45.461194 1644261 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 00:46:45.461309 1644261 kubeadm.go:391] StartCluster: {Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:46:45.461403 1644261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:46:45.461467 1644261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:46:45.503476 1644261 cri.go:89] found id: ""
	I0420 00:46:45.503547 1644261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 00:46:45.512391 1644261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 00:46:45.521198 1644261 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0420 00:46:45.521290 1644261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 00:46:45.530277 1644261 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 00:46:45.530297 1644261 kubeadm.go:156] found existing configuration files:
	
	I0420 00:46:45.530357 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 00:46:45.539187 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 00:46:45.539295 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 00:46:45.547666 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 00:46:45.556291 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 00:46:45.556360 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 00:46:45.564678 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 00:46:45.573450 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 00:46:45.573517 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 00:46:45.582508 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 00:46:45.591500 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 00:46:45.591577 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
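	(Editor's note: the four check-then-remove passes above amount to the compact loop below — a sketch using the same endpoint string and paths.)
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	      || sudo rm -f "/etc/kubernetes/$f.conf"
	  done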
	I0420 00:46:45.600866 1644261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0420 00:46:45.645453 1644261 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 00:46:45.645777 1644261 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 00:46:45.683542 1644261 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0420 00:46:45.683657 1644261 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0420 00:46:45.683722 1644261 kubeadm.go:309] OS: Linux
	I0420 00:46:45.683789 1644261 kubeadm.go:309] CGROUPS_CPU: enabled
	I0420 00:46:45.683865 1644261 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0420 00:46:45.683931 1644261 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0420 00:46:45.684007 1644261 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0420 00:46:45.684075 1644261 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0420 00:46:45.684148 1644261 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0420 00:46:45.684214 1644261 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0420 00:46:45.684312 1644261 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0420 00:46:45.684387 1644261 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0420 00:46:45.759423 1644261 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 00:46:45.759626 1644261 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 00:46:45.759767 1644261 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0420 00:46:46.002291 1644261 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 00:46:46.007243 1644261 out.go:204]   - Generating certificates and keys ...
	I0420 00:46:46.007483 1644261 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 00:46:46.007612 1644261 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 00:46:46.884914 1644261 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 00:46:47.257057 1644261 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 00:46:47.525713 1644261 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 00:46:48.004926 1644261 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 00:46:48.760658 1644261 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 00:46:48.761028 1644261 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-747503 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0420 00:46:49.351744 1644261 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 00:46:49.352075 1644261 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-747503 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0420 00:46:50.201612 1644261 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 00:46:51.561008 1644261 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 00:46:51.893672 1644261 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 00:46:51.893960 1644261 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 00:46:52.391610 1644261 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 00:46:52.832785 1644261 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 00:46:53.450795 1644261 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 00:46:54.163371 1644261 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 00:46:54.525910 1644261 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 00:46:54.526499 1644261 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 00:46:54.530099 1644261 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 00:46:54.533698 1644261 out.go:204]   - Booting up control plane ...
	I0420 00:46:54.533810 1644261 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 00:46:54.533896 1644261 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 00:46:54.534330 1644261 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 00:46:54.544911 1644261 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 00:46:54.545769 1644261 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 00:46:54.545990 1644261 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 00:46:54.640341 1644261 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 00:46:54.640434 1644261 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 00:46:56.142567 1644261 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.502003018s
	I0420 00:46:56.142654 1644261 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 00:47:01.644242 1644261 kubeadm.go:309] [api-check] The API server is healthy after 5.501943699s
	I0420 00:47:01.664659 1644261 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 00:47:01.681300 1644261 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 00:47:01.708476 1644261 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 00:47:01.708705 1644261 kubeadm.go:309] [mark-control-plane] Marking the node addons-747503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 00:47:01.721036 1644261 kubeadm.go:309] [bootstrap-token] Using token: gydxtq.1vtpvmdo173k1bfx
	I0420 00:47:01.723573 1644261 out.go:204]   - Configuring RBAC rules ...
	I0420 00:47:01.723699 1644261 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 00:47:01.728901 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 00:47:01.737904 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 00:47:01.741657 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 00:47:01.745445 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 00:47:01.750064 1644261 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 00:47:02.051404 1644261 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 00:47:02.494567 1644261 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 00:47:03.051115 1644261 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 00:47:03.052415 1644261 kubeadm.go:309] 
	I0420 00:47:03.052490 1644261 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 00:47:03.052501 1644261 kubeadm.go:309] 
	I0420 00:47:03.052583 1644261 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 00:47:03.052596 1644261 kubeadm.go:309] 
	I0420 00:47:03.052621 1644261 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 00:47:03.052682 1644261 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 00:47:03.052735 1644261 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 00:47:03.052744 1644261 kubeadm.go:309] 
	I0420 00:47:03.052796 1644261 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 00:47:03.052805 1644261 kubeadm.go:309] 
	I0420 00:47:03.052851 1644261 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 00:47:03.052861 1644261 kubeadm.go:309] 
	I0420 00:47:03.052912 1644261 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 00:47:03.052987 1644261 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 00:47:03.053062 1644261 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 00:47:03.053074 1644261 kubeadm.go:309] 
	I0420 00:47:03.053155 1644261 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 00:47:03.053232 1644261 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 00:47:03.053241 1644261 kubeadm.go:309] 
	I0420 00:47:03.053322 1644261 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token gydxtq.1vtpvmdo173k1bfx \
	I0420 00:47:03.053425 1644261 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9c904917a7f9caa355a71a4c03ca34b03d28761d5d47f15de292975c6da7288d \
	I0420 00:47:03.053449 1644261 kubeadm.go:309] 	--control-plane 
	I0420 00:47:03.053475 1644261 kubeadm.go:309] 
	I0420 00:47:03.053587 1644261 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 00:47:03.053596 1644261 kubeadm.go:309] 
	I0420 00:47:03.053675 1644261 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token gydxtq.1vtpvmdo173k1bfx \
	I0420 00:47:03.053777 1644261 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9c904917a7f9caa355a71a4c03ca34b03d28761d5d47f15de292975c6da7288d 
	I0420 00:47:03.056801 1644261 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0420 00:47:03.056915 1644261 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
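	(Editor's note: the second warning above is self-describing; on a node meant to survive reboots the fix would be the command below, taken straight from the warning text. It is harmless to skip in this throwaway kic container, where minikube starts kubelet itself.)
	  sudo systemctl enable kubelet.service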
	I0420 00:47:03.056944 1644261 cni.go:84] Creating CNI manager for ""
	I0420 00:47:03.056957 1644261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:47:03.060887 1644261 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0420 00:47:03.063358 1644261 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0420 00:47:03.067117 1644261 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0420 00:47:03.067135 1644261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0420 00:47:03.086237 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
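	(Editor's note: a hedged follow-up check — the kindnet manifest applied above should surface as a DaemonSet in kube-system; the exact DaemonSet name is not shown in this log, so the sketch just lists them.)
	  sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get daemonsets -n kube-system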
	I0420 00:47:03.390022 1644261 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 00:47:03.390173 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:03.390313 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-747503 minikube.k8s.io/updated_at=2024_04_20T00_47_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=addons-747503 minikube.k8s.io/primary=true
	I0420 00:47:03.585439 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:03.585498 1644261 ops.go:34] apiserver oom_adj: -16
	I0420 00:47:04.085626 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:04.586529 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:05.086557 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:05.585699 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:06.085654 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:06.585771 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:07.085576 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:07.586155 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:08.086096 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:08.585649 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:09.086504 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:09.585610 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:10.085668 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:10.586519 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:11.086404 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:11.586239 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:12.085669 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:12.586287 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:13.086534 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:13.586536 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:14.085702 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:14.586138 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:15.085794 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:15.586433 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:16.085650 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:16.179804 1644261 kubeadm.go:1107] duration metric: took 12.78970109s to wait for elevateKubeSystemPrivileges
	W0420 00:47:16.179838 1644261 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 00:47:16.179846 1644261 kubeadm.go:393] duration metric: took 30.718541399s to StartCluster
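The burst of identical "kubectl get sa default" calls above is minikube polling, at roughly half-second intervals, for the "default" ServiceAccount to exist before the minikube-rbac clusterrolebinding against it can take effect; the duration metric for elevateKubeSystemPrivileges records how long that took. A minimal sketch of the poll-until-ready pattern, reusing the binary paths from the log but not taken from minikube's actual source:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries "kubectl get sa default" until it succeeds or the
    // timeout elapses, mirroring the repeated log lines above.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.0/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                return nil // ServiceAccount exists; RBAC against it can proceed
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA(30 * time.Second); err != nil {
            fmt.Println(err)
        }
    }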
	I0420 00:47:16.179861 1644261 settings.go:142] acquiring lock: {Name:mk38dc124731a3de0f512758a89f5557db305d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:47:16.180388 1644261 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 00:47:16.180815 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/kubeconfig: {Name:mk33979dc7705003abaa608c8031c04a91a05c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:47:16.181428 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0420 00:47:16.181453 1644261 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:47:16.183715 1644261 out.go:177] * Verifying Kubernetes components...
	I0420 00:47:16.181718 1644261 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:16.181730 1644261 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0420 00:47:16.185704 1644261 addons.go:69] Setting yakd=true in profile "addons-747503"
	I0420 00:47:16.185734 1644261 addons.go:234] Setting addon yakd=true in "addons-747503"
	I0420 00:47:16.185768 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.186268 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.186447 1644261 addons.go:69] Setting ingress-dns=true in profile "addons-747503"
	I0420 00:47:16.186469 1644261 addons.go:234] Setting addon ingress-dns=true in "addons-747503"
	I0420 00:47:16.186517 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.186921 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.187248 1644261 addons.go:69] Setting inspektor-gadget=true in profile "addons-747503"
	I0420 00:47:16.187275 1644261 addons.go:234] Setting addon inspektor-gadget=true in "addons-747503"
	I0420 00:47:16.187315 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.187697 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.187894 1644261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:47:16.188126 1644261 addons.go:69] Setting cloud-spanner=true in profile "addons-747503"
	I0420 00:47:16.188155 1644261 addons.go:234] Setting addon cloud-spanner=true in "addons-747503"
	I0420 00:47:16.188175 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.188546 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.191365 1644261 addons.go:69] Setting metrics-server=true in profile "addons-747503"
	I0420 00:47:16.191404 1644261 addons.go:234] Setting addon metrics-server=true in "addons-747503"
	I0420 00:47:16.191442 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.191858 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.195564 1644261 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-747503"
	I0420 00:47:16.195639 1644261 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-747503"
	I0420 00:47:16.195677 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.196140 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.196393 1644261 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-747503"
	I0420 00:47:16.196425 1644261 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-747503"
	I0420 00:47:16.196458 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.196858 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.212712 1644261 addons.go:69] Setting registry=true in profile "addons-747503"
	I0420 00:47:16.212761 1644261 addons.go:234] Setting addon registry=true in "addons-747503"
	I0420 00:47:16.212800 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.213262 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.213595 1644261 addons.go:69] Setting default-storageclass=true in profile "addons-747503"
	I0420 00:47:16.213636 1644261 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-747503"
	I0420 00:47:16.213917 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.229059 1644261 addons.go:69] Setting storage-provisioner=true in profile "addons-747503"
	I0420 00:47:16.229107 1644261 addons.go:234] Setting addon storage-provisioner=true in "addons-747503"
	I0420 00:47:16.229144 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.229674 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.238837 1644261 addons.go:69] Setting gcp-auth=true in profile "addons-747503"
	I0420 00:47:16.238899 1644261 mustload.go:65] Loading cluster: addons-747503
	I0420 00:47:16.239098 1644261 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:16.239349 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.247529 1644261 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-747503"
	I0420 00:47:16.247581 1644261 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-747503"
	I0420 00:47:16.247904 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.260923 1644261 addons.go:69] Setting ingress=true in profile "addons-747503"
	I0420 00:47:16.261022 1644261 addons.go:234] Setting addon ingress=true in "addons-747503"
	I0420 00:47:16.261118 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.261627 1644261 addons.go:69] Setting volumesnapshots=true in profile "addons-747503"
	I0420 00:47:16.261657 1644261 addons.go:234] Setting addon volumesnapshots=true in "addons-747503"
	I0420 00:47:16.261681 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.262076 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.269352 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.380646 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0420 00:47:16.389090 1644261 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0420 00:47:16.389162 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0420 00:47:16.389257 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
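The "scp memory" lines here and throughout the addon setup indicate that the manifest bytes are embedded in the minikube binary and streamed to the target path over the SSH session, rather than copied from a local file. A rough sketch of that idea, assuming an already-dialed *ssh.Client from golang.org/x/crypto/ssh and a sudo-tee transport (both assumptions for illustration, not minikube's actual implementation):

    package nodecopy

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // copyToNode streams an in-memory manifest to dst on the node, using sudo tee
    // so no temporary file is needed on either side.
    func copyToNode(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }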
	I0420 00:47:16.396585 1644261 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0420 00:47:16.413756 1644261 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0420 00:47:16.415782 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 00:47:16.415807 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 00:47:16.415883 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.413893 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0420 00:47:16.401466 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.403036 1644261 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-747503"
	I0420 00:47:16.418273 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 00:47:16.418280 1644261 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0420 00:47:16.418293 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0420 00:47:16.420636 1644261 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0420 00:47:16.420642 1644261 out.go:177]   - Using image docker.io/registry:2.8.3
	I0420 00:47:16.420647 1644261 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0420 00:47:16.425004 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0420 00:47:16.425041 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.426776 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.426994 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.441701 1644261 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0420 00:47:16.441728 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0420 00:47:16.441815 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.442302 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0420 00:47:16.445183 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0420 00:47:16.442535 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0420 00:47:16.443701 1644261 addons.go:234] Setting addon default-storageclass=true in "addons-747503"
	I0420 00:47:16.448904 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0420 00:47:16.448990 1644261 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:47:16.449022 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.451018 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.451164 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0420 00:47:16.453010 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0420 00:47:16.451430 1644261 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0420 00:47:16.451490 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0420 00:47:16.451506 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0420 00:47:16.451526 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 00:47:16.455094 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0420 00:47:16.456879 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0420 00:47:16.456900 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0420 00:47:16.456978 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.458542 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0420 00:47:16.458614 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.474496 1644261 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0420 00:47:16.474512 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0420 00:47:16.474577 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.477483 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.461983 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0420 00:47:16.500835 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0420 00:47:16.502674 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0420 00:47:16.504324 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0420 00:47:16.506235 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0420 00:47:16.506258 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0420 00:47:16.506328 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.517701 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.462052 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.601782 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0420 00:47:16.609614 1644261 out.go:177]   - Using image docker.io/busybox:stable
	I0420 00:47:16.605998 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.601684 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.609993 1644261 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0420 00:47:16.611424 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0420 00:47:16.611499 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.620520 1644261 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0420 00:47:16.617584 1644261 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 00:47:16.624839 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 00:47:16.624942 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.638699 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.639612 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.641927 1644261 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0420 00:47:16.641980 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0420 00:47:16.642067 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.668331 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0420 00:47:16.668981 1644261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:47:16.669308 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.673621 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.676969 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.677828 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.699192 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.730795 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.732606 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.742311 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.780716 1644261 node_ready.go:35] waiting up to 6m0s for node "addons-747503" to be "Ready" ...
	I0420 00:47:16.921085 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0420 00:47:16.921116 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0420 00:47:16.991932 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0420 00:47:16.996333 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0420 00:47:17.100495 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 00:47:17.100519 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0420 00:47:17.112923 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0420 00:47:17.112950 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0420 00:47:17.120377 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0420 00:47:17.120403 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0420 00:47:17.189815 1644261 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0420 00:47:17.189844 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0420 00:47:17.207623 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0420 00:47:17.210877 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0420 00:47:17.219064 1644261 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0420 00:47:17.219098 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0420 00:47:17.227561 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0420 00:47:17.227588 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0420 00:47:17.268204 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 00:47:17.268232 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 00:47:17.272800 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0420 00:47:17.272833 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0420 00:47:17.275286 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:47:17.303754 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 00:47:17.334811 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0420 00:47:17.334879 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0420 00:47:17.341610 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0420 00:47:17.394607 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0420 00:47:17.394678 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0420 00:47:17.407225 1644261 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0420 00:47:17.407292 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0420 00:47:17.411478 1644261 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0420 00:47:17.411505 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0420 00:47:17.412384 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0420 00:47:17.412410 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0420 00:47:17.459411 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 00:47:17.459482 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 00:47:17.520995 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0420 00:47:17.521029 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0420 00:47:17.562920 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0420 00:47:17.570793 1644261 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0420 00:47:17.570820 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0420 00:47:17.628250 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0420 00:47:17.628287 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0420 00:47:17.635304 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 00:47:17.675653 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0420 00:47:17.675685 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0420 00:47:17.677655 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0420 00:47:17.692234 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0420 00:47:17.692263 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0420 00:47:17.785863 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0420 00:47:17.785891 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0420 00:47:17.790678 1644261 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0420 00:47:17.790712 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0420 00:47:17.836414 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0420 00:47:17.836445 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0420 00:47:17.897210 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0420 00:47:17.956384 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0420 00:47:17.956418 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0420 00:47:17.972928 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0420 00:47:17.972958 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0420 00:47:18.058087 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0420 00:47:18.058115 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0420 00:47:18.122175 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0420 00:47:18.122210 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0420 00:47:18.196857 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0420 00:47:18.196898 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0420 00:47:18.223649 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0420 00:47:18.223683 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0420 00:47:18.308893 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0420 00:47:18.308918 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0420 00:47:18.317127 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0420 00:47:18.431462 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0420 00:47:18.431507 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0420 00:47:18.590897 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0420 00:47:18.976068 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:20.096220 1644261 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.427813593s)
	I0420 00:47:20.096422 1644261 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
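The long sed pipeline that just completed edits the coredns ConfigMap in place: it splices a hosts block into the Corefile so that host.minikube.internal resolves to the gateway address 192.168.49.1 from inside the cluster, and it inserts "log" before the "errors" plugin line to turn on query logging. Reconstructed from the sed expression itself, the injected fragment is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }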
	I0420 00:47:20.703392 1644261 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-747503" context rescaled to 1 replicas
	I0420 00:47:21.021961 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.02998807s)
	I0420 00:47:21.316695 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:22.188682 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.192309833s)
	I0420 00:47:22.188774 1644261 addons.go:470] Verifying addon ingress=true in "addons-747503"
	I0420 00:47:22.192225 1644261 out.go:177] * Verifying ingress addon...
	I0420 00:47:22.188943 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.981293644s)
	I0420 00:47:22.189126 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.978079286s)
	I0420 00:47:22.189179 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.91383171s)
	I0420 00:47:22.189212 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.88538872s)
	I0420 00:47:22.189275 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.847605776s)
	I0420 00:47:22.189361 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.554030452s)
	I0420 00:47:22.189389 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.51171099s)
	I0420 00:47:22.189421 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.626351674s)
	I0420 00:47:22.192770 1644261 addons.go:470] Verifying addon metrics-server=true in "addons-747503"
	I0420 00:47:22.192870 1644261 addons.go:470] Verifying addon registry=true in "addons-747503"
	I0420 00:47:22.196209 1644261 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0420 00:47:22.197884 1644261 out.go:177] * Verifying registry addon...
	I0420 00:47:22.200704 1644261 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0420 00:47:22.197983 1644261 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-747503 service yakd-dashboard -n yakd-dashboard
	
	I0420 00:47:22.211783 1644261 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0420 00:47:22.211886 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:22.214456 1644261 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0420 00:47:22.214527 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0420 00:47:22.238344 1644261 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
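The warning above is the standard Kubernetes optimistic-concurrency failure: between reading the local-path StorageClass and writing it back with the default-class annotation cleared, something else updated the object, so the stale resourceVersion is rejected. The usual cure is to re-read and retry the update; a minimal client-go sketch of that pattern (clientset construction omitted; this is not minikube's code):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation, re-reading the object on
    // every attempt so each Update carries a fresh resourceVersion.
    func markNonDefault(cs *kubernetes.Clientset, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
    }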
	I0420 00:47:22.380821 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.063648234s)
	I0420 00:47:22.381083 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.483841163s)
	W0420 00:47:22.381139 1644261 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0420 00:47:22.381175 1644261 retry.go:31] will retry after 166.820915ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
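This failure is an ordering problem, not a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established the new kinds, hence "ensure CRDs are installed first". minikube's answer is the timed retry (with --force) below; a more deterministic two-phase sketch, with kubectl assumed on PATH and the file and CRD names taken from the output above:

    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        cmd := exec.Command("kubectl", args...)
        cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
        if err := cmd.Run(); err != nil {
            log.Fatalf("kubectl %v: %v", args, err)
        }
    }

    func main() {
        // Phase 1: create the CRD, then block until the API server serves the kind.
        run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
        run("wait", "--for=condition=Established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
        // Phase 2: objects of the new kind are now safe to apply.
        run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
    }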
	I0420 00:47:22.548877 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0420 00:47:22.600574 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.009625669s)
	I0420 00:47:22.600671 1644261 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-747503"
	I0420 00:47:22.603229 1644261 out.go:177] * Verifying csi-hostpath-driver addon...
	I0420 00:47:22.606034 1644261 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0420 00:47:22.670601 1644261 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0420 00:47:22.670671 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:22.720534 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:22.725459 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:23.175823 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:23.228069 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:23.229636 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:23.610334 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:23.701866 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:23.705088 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:23.785324 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:24.112035 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:24.205349 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:24.210857 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:24.611934 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:24.703020 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:24.705704 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:25.111349 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:25.203668 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:25.206311 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:25.544549 1644261 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0420 00:47:25.544657 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:25.569718 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:25.611543 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:25.707373 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:25.711627 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:25.751589 1644261 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0420 00:47:25.791122 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:25.816447 1644261 addons.go:234] Setting addon gcp-auth=true in "addons-747503"
	I0420 00:47:25.816499 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:25.816957 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:25.845517 1644261 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0420 00:47:25.845586 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:25.877309 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.328324528s)
	I0420 00:47:25.877697 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:25.975676 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0420 00:47:25.978070 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0420 00:47:25.980578 1644261 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0420 00:47:25.980605 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0420 00:47:25.999059 1644261 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0420 00:47:25.999087 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0420 00:47:26.023351 1644261 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0420 00:47:26.023374 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0420 00:47:26.045389 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0420 00:47:26.110669 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:26.202148 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:26.205465 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:26.611994 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:26.731236 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:26.732300 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:26.789639 1644261 addons.go:470] Verifying addon gcp-auth=true in "addons-747503"
	I0420 00:47:26.792415 1644261 out.go:177] * Verifying gcp-auth addon...
	I0420 00:47:26.795034 1644261 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0420 00:47:26.801627 1644261 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0420 00:47:26.801647 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
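The kapi.go "waiting for pod" lines that dominate the remainder of this log all follow one pattern: list pods by label selector in a namespace and re-check until every match is Running. A stripped-down sketch of that readiness check with client-go (clientset assumed; not the actual kapi implementation):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podsRunning reports whether at least one pod matches the selector and all
    // matches have reached the Running phase.
    func podsRunning(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil || len(pods.Items) == 0 {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return true, nil
    }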
	I0420 00:47:27.111220 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:27.202244 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:27.206342 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:27.299355 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:27.611552 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:27.704193 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:27.706819 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:27.800094 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:28.112242 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:28.203543 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:28.205771 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:28.285015 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:28.299535 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:28.610656 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:28.701793 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:28.705035 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:28.799716 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:29.110925 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:29.202332 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:29.205716 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:29.298341 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:29.610673 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:29.701621 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:29.706306 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:29.798919 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:30.112640 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:30.204537 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:30.206706 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:30.299144 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:30.610522 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:30.702150 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:30.704677 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:30.784363 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:30.799016 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:31.110019 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:31.202111 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:31.205063 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:31.299204 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:31.611514 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:31.701803 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:31.704838 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:31.798703 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:32.111116 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:32.202000 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:32.204400 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:32.298928 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:32.611045 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:32.702323 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:32.705822 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:32.789058 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:32.798725 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:33.110424 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:33.204110 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:33.205044 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:33.298391 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:33.610022 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:33.701936 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:33.705447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:33.798469 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:34.111023 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:34.201972 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:34.204989 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:34.298734 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:34.611132 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:34.702210 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:34.704568 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:34.798686 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:35.114336 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:35.201894 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:35.204405 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:35.284372 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:35.299112 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:35.610506 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:35.703095 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:35.704656 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:35.798675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:36.111001 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:36.202000 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:36.205085 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:36.298568 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:36.610614 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:36.701867 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:36.705335 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:36.798308 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:37.110714 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:37.201367 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:37.205486 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:37.298446 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:37.610698 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:37.701750 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:37.703884 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:37.784367 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:37.798325 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:38.110748 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:38.201466 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:38.204655 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:38.298691 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:38.610939 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:38.702667 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:38.706391 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:38.798284 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:39.110263 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:39.202152 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:39.205507 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:39.298283 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:39.610642 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:39.701439 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:39.704765 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:39.784473 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:39.798580 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:40.111665 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:40.201902 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:40.204092 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:40.298312 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:40.611241 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:40.701932 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:40.704592 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:40.798190 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:41.110629 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:41.202188 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:41.204072 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:41.298945 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:41.611166 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:41.702282 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:41.704810 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:41.784563 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:41.798789 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:42.110754 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:42.202995 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:42.205235 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:42.299222 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:42.611146 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:42.702723 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:42.705024 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:42.798378 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:43.110641 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:43.202122 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:43.204551 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:43.299823 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:43.610371 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:43.702371 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:43.705119 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:43.798537 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:44.110802 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:44.201908 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:44.204050 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:44.283628 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:44.299022 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:44.611069 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:44.702139 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:44.704447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:44.798400 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:45.110913 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:45.203669 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:45.207125 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:45.299652 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:45.611596 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:45.702314 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:45.704145 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:45.798613 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:46.111305 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:46.202398 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:46.207349 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:46.284676 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:46.299304 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:46.610152 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:46.701326 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:46.704270 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:46.798790 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:47.110635 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:47.201792 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:47.203647 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:47.298677 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:47.610877 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:47.701753 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:47.704933 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:47.798847 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:48.111425 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:48.201251 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:48.204521 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:48.298577 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:48.615418 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:48.703331 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:48.707352 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:48.784673 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:48.800104 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:49.112682 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:49.201597 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:49.204732 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:49.298628 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:49.610976 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:49.701739 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:49.705321 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:49.798577 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:50.111739 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:50.201941 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:50.203737 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:50.299042 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:50.610954 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:50.709155 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:50.709916 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:50.799587 1644261 node_ready.go:49] node "addons-747503" has status "Ready":"True"
	I0420 00:47:50.799614 1644261 node_ready.go:38] duration metric: took 34.018855397s for node "addons-747503" to be "Ready" ...
	I0420 00:47:50.799624 1644261 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
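	(The two node_ready.go lines above mark the node flipping from "Ready":"False" to "Ready":"True" after 34s of polling. As a rough illustration only — waitForNodeReady, the readiness package name, and the 2s interval are assumptions, not minikube's actual code — a client-go poll of the node's Ready condition looks like this:

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForNodeReady polls the node's Ready condition until it is True or the
	// timeout expires, printing the same status line the log shows above.
	func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	Once this returns, the log switches from node_ready.go to pod_ready.go lines: the system-critical pods listed above are checked one by one.)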
	I0420 00:47:50.839199 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:50.842280 1644261 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pj8wd" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.121354 1644261 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0420 00:47:51.121385 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:51.265128 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:51.306316 1644261 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0420 00:47:51.306343 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:51.316679 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:51.646936 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:51.738236 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:51.738864 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:51.825253 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:51.869885 1644261 pod_ready.go:92] pod "coredns-7db6d8ff4d-pj8wd" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.869905 1644261 pod_ready.go:81] duration metric: took 1.02759912s for pod "coredns-7db6d8ff4d-pj8wd" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.869936 1644261 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.880443 1644261 pod_ready.go:92] pod "etcd-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.880468 1644261 pod_ready.go:81] duration metric: took 10.523706ms for pod "etcd-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.880483 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.893210 1644261 pod_ready.go:92] pod "kube-apiserver-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.893237 1644261 pod_ready.go:81] duration metric: took 12.745711ms for pod "kube-apiserver-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.893253 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.902837 1644261 pod_ready.go:92] pod "kube-controller-manager-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.902861 1644261 pod_ready.go:81] duration metric: took 9.600699ms for pod "kube-controller-manager-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.902876 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cmk9r" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.984300 1644261 pod_ready.go:92] pod "kube-proxy-cmk9r" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.984328 1644261 pod_ready.go:81] duration metric: took 81.441699ms for pod "kube-proxy-cmk9r" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.984340 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:52.112853 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:52.203480 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:52.206627 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:52.298995 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:52.385428 1644261 pod_ready.go:92] pod "kube-scheduler-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:52.385502 1644261 pod_ready.go:81] duration metric: took 401.135821ms for pod "kube-scheduler-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:52.385569 1644261 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace to be "Ready" ...
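	(Each pod_ready.go check above — coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler — reduces to reading the pod's PodReady condition, which the log prints as has status "Ready":"True". A minimal sketch, assuming a helper named podIsReady (illustrative, not the actual minikube function):

	package readiness

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// podIsReady reports whether a pod's PodReady condition is True; this is the
	// check behind the pod_ready.go "Ready":"True"/"False" lines in the log.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	metrics-server is the only pod in the list that stays "Ready":"False" from here on, which is consistent with the TestAddons/parallel/MetricsServer failure recorded in the summary.)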
	I0420 00:47:52.612694 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:52.702190 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:52.747764 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:52.816322 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:53.112453 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:53.204654 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:53.207621 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:53.300011 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:53.611628 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:53.705044 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:53.707972 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:53.798494 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:54.114108 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:54.205753 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:54.208644 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:54.299729 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:54.393624 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:47:54.614619 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:54.705416 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:54.734978 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:54.802347 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:55.114619 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:55.207471 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:55.207745 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:55.298943 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:55.613443 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:55.703721 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:55.711000 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:55.800030 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:56.112264 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:56.202387 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:56.205588 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:56.298472 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:56.611287 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:56.702954 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:56.706206 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:56.806862 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:56.892905 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:47:57.111783 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:57.202483 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:57.206930 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:57.298660 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:57.614099 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:57.715434 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:57.716689 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:57.799161 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:58.111940 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:58.202592 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:58.205903 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:58.299398 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:58.614025 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:58.703746 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:58.710610 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:58.800586 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:59.113393 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:59.210841 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:59.213028 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:59.300372 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:59.395893 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:47:59.624560 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:59.702174 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:59.706175 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:59.798856 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:00.144163 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:00.225477 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:00.229320 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:00.331039 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:00.612190 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:00.703620 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:00.707231 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:00.800642 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:01.114451 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:01.205638 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:01.213681 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:01.299675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:01.612691 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:01.703427 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:01.705034 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:01.798680 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:01.892315 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:02.114119 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:02.204802 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:02.225935 1644261 kapi.go:107] duration metric: took 40.025228032s to wait for kubernetes.io/minikube-addons=registry ...
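	(The registry selector is the first of the four kapi.go:96 waits to finish, 40s after it started; the csi-hostpath-driver, ingress-nginx, and gcp-auth selectors keep polling below. The log cadence — each selector re-checked roughly every 500ms — suggests a list-and-check loop. A minimal sketch under that assumption, with waitForPodsByLabel as an invented name rather than the real kapi.WaitForPods:

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsByLabel lists pods matching selector and polls until every match
	// is Running with PodReady=True, echoing the "waiting for pod ... current
	// state: Pending" lines above while it waits.
	func waitForPodsByLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil // nothing scheduled yet; keep polling
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if p.Status.Phase != corev1.PodRunning || !ready {
					return false, nil // at least one matching pod is still coming up
				}
			}
			return true, nil
		})
	}

	The "Found 3 Pods" / "Found 2 Pods" kapi.go:86 lines earlier correspond to the first poll where the list call returns a non-empty result.)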
	I0420 00:48:02.312863 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:02.627694 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:02.703161 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:02.798728 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:03.113524 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:03.202842 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:03.299523 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:03.613428 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:03.703775 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:03.800788 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:03.895136 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:04.113370 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:04.202943 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:04.299410 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:04.613215 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:04.702731 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:04.799550 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:05.113042 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:05.202585 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:05.299047 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:05.614002 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:05.702680 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:05.802558 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:05.895296 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:06.114675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:06.204549 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:06.302473 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:06.614112 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:06.703570 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:06.799260 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:07.113316 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:07.202565 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:07.298863 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:07.612018 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:07.703264 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:07.798648 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:08.112270 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:08.203042 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:08.299153 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:08.393303 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:08.613346 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:08.702765 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:08.799658 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:09.112127 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:09.203847 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:09.299447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:09.614216 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:09.703657 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:09.800601 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:10.118707 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:10.202184 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:10.298516 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:10.393387 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:10.613200 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:10.703138 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:10.800550 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:11.137314 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:11.202576 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:11.299140 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:11.613141 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:11.703548 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:11.805116 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:12.111561 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:12.202457 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:12.299341 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:12.612160 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:12.702500 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:12.799184 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:12.892518 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:13.111952 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:13.202640 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:13.299417 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:13.612612 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:13.703214 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:13.799562 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:14.112092 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:14.202789 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:14.298940 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:14.612071 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:14.701930 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:14.798636 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:15.112017 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:15.202850 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:15.300071 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:15.396380 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:15.642371 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:15.703162 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:15.799216 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:16.113062 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:16.202577 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:16.298955 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:16.612867 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:16.702379 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:16.798754 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:17.111096 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:17.202268 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:17.298651 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:17.611502 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:17.702326 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:17.799117 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:17.891723 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:18.112154 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:18.203753 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:18.298844 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:18.611817 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:18.701804 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:18.799969 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:19.112834 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:19.202549 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:19.299356 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:19.612472 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:19.702362 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:19.800164 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:19.894073 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:20.112337 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:20.205154 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:20.299624 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:20.612270 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:20.702951 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:20.798601 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:21.111917 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:21.202373 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:21.299686 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:21.612723 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:21.702148 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:21.798629 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:22.112515 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:22.203367 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:22.302148 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:22.392640 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:22.612660 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:22.702810 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:22.798786 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:23.111919 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:23.201812 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:23.299108 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:23.611637 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:23.701775 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:23.799304 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:24.131787 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:24.255684 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:24.316635 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:24.418028 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:24.617598 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:24.702006 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:24.799016 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:25.112839 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:25.201904 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:25.298574 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:25.611863 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:25.702151 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:25.798626 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:26.112621 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:26.202033 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:26.298771 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:26.621173 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:26.704880 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:26.799455 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:26.892011 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:27.111595 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:27.202612 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:27.298946 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:27.612741 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:27.703062 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:27.799262 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:28.111989 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:28.205133 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:28.300116 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:28.617767 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:28.702926 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:28.798906 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:28.895433 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:29.115635 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:29.202330 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:29.305517 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:29.612221 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:29.715115 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:29.800816 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:30.113986 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:30.204176 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:30.300436 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:30.611466 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:30.702492 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:30.799210 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:31.132012 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:31.204558 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:31.299795 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:31.394405 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:31.611911 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:31.702249 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:31.798674 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:32.115252 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:32.204156 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:32.298833 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:32.612034 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:32.702349 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:32.798789 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:33.112130 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:33.202582 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:33.299170 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:33.612774 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:33.702834 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:33.800234 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:33.892097 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:34.112008 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:34.202297 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:34.298717 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:34.619328 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:34.703435 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:34.806019 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:35.120414 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:35.205464 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:35.300355 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:35.613650 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:35.703059 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:35.799691 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:35.894200 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:36.113606 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:36.202374 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:36.299444 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:36.612786 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:36.702747 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:36.799741 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:37.112124 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:37.202762 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:37.301973 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:37.612106 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:37.702390 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:37.820773 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:37.896709 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:38.112525 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:38.203055 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:38.298160 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:38.614559 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:38.702186 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:38.798806 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:39.113206 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:39.203142 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:39.302909 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:39.621741 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:39.702067 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:39.799389 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:40.113336 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:40.203042 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:40.298723 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:40.395135 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:40.612345 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:40.702488 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:40.799104 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:41.122104 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:41.202448 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:41.300486 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:41.612243 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:41.703549 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:41.799237 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:42.111985 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:42.203111 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:42.302465 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:42.612639 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:42.703714 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:42.799406 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:42.892695 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:43.112179 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:43.203272 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:43.298925 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:43.612258 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:43.702705 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:43.799390 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:44.115774 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:44.202051 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:44.298314 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:44.611987 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:44.702124 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:44.798541 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:45.112791 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:45.204493 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:45.299729 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:45.393729 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:45.612789 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:45.702486 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:45.799560 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:46.119540 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:46.202732 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:46.299293 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:46.611893 1644261 kapi.go:107] duration metric: took 1m24.005858121s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0420 00:48:46.702042 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:46.798447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:47.202393 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:47.298773 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:47.701765 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:47.799351 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:47.892278 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:48.202626 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:48.299390 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:48.702100 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:48.799332 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:49.201889 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:49.299047 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:49.702415 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:49.799051 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:49.892790 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:50.202697 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:50.299292 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:50.702478 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:50.798784 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:51.202229 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:51.298707 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:51.703133 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:51.798709 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:52.202174 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:52.298480 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:52.391893 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:52.702258 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:52.798434 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:53.202557 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:53.298973 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:53.702469 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:53.798795 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:54.201914 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:54.299019 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:54.392210 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:54.702208 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:54.798675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:55.201889 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:55.299247 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:55.703349 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:55.798800 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:56.201804 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:56.299211 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:56.398126 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:56.703544 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:56.801082 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:57.203554 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:57.300723 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:57.701744 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:57.799038 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:58.202294 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:58.298796 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:58.408975 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:58.703585 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:58.800895 1644261 kapi.go:107] duration metric: took 1m32.005859357s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0420 00:48:58.803430 1644261 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-747503 cluster.
	I0420 00:48:58.805977 1644261 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0420 00:48:58.808774 1644261 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
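Both hints in the messages above map onto two shell commands; a minimal sketch, assuming the label value is `true` (the message only names the key) and using a hypothetical pod name `my-pod`:

	# Opt a single pod out of credential mounting (label value assumed):
	kubectl --context addons-747503 label pod my-pod gcp-auth-skip-secret=true

	# Re-mount credentials into pods created before the addon finished:
	minikube -p addons-747503 addons enable gcp-auth --refresh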
	I0420 00:48:59.214348 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:59.702282 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:00.204506 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:00.703046 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:00.895496 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:49:01.210517 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:01.401710 1644261 pod_ready.go:92] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"True"
	I0420 00:49:01.401740 1644261 pod_ready.go:81] duration metric: took 1m9.016144355s for pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace to be "Ready" ...
	I0420 00:49:01.401759 1644261 pod_ready.go:38] duration metric: took 1m10.602112322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:49:01.401774 1644261 api_server.go:52] waiting for apiserver process to appear ...
	I0420 00:49:01.401809 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:49:01.401878 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:49:01.476056 1644261 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:01.476082 1644261 cri.go:89] found id: ""
	I0420 00:49:01.476091 1644261 logs.go:276] 1 containers: [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053]
	I0420 00:49:01.476157 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.482452 1644261 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:49:01.482549 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:49:01.545143 1644261 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:01.545169 1644261 cri.go:89] found id: ""
	I0420 00:49:01.545179 1644261 logs.go:276] 1 containers: [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330]
	I0420 00:49:01.545245 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.550669 1644261 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:49:01.550748 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:49:01.613640 1644261 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:01.613666 1644261 cri.go:89] found id: ""
	I0420 00:49:01.613678 1644261 logs.go:276] 1 containers: [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df]
	I0420 00:49:01.613749 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.619858 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:49:01.619944 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:49:01.677562 1644261 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:01.677589 1644261 cri.go:89] found id: ""
	I0420 00:49:01.677600 1644261 logs.go:276] 1 containers: [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929]
	I0420 00:49:01.677672 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.682732 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:49:01.682885 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:49:01.704238 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:01.772321 1644261 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:01.772392 1644261 cri.go:89] found id: ""
	I0420 00:49:01.772427 1644261 logs.go:276] 1 containers: [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20]
	I0420 00:49:01.772523 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.776830 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:49:01.776962 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:49:01.856325 1644261 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:01.856401 1644261 cri.go:89] found id: ""
	I0420 00:49:01.856433 1644261 logs.go:276] 1 containers: [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e]
	I0420 00:49:01.856549 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.861620 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:49:01.861776 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:49:01.928733 1644261 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:01.928808 1644261 cri.go:89] found id: ""
	I0420 00:49:01.928845 1644261 logs.go:276] 1 containers: [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52]
	I0420 00:49:01.928943 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.933261 1644261 logs.go:123] Gathering logs for dmesg ...
	I0420 00:49:01.933340 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:49:01.955010 1644261 logs.go:123] Gathering logs for kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] ...
	I0420 00:49:01.955091 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:02.037301 1644261 logs.go:123] Gathering logs for etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] ...
	I0420 00:49:02.037382 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:02.098944 1644261 logs.go:123] Gathering logs for kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] ...
	I0420 00:49:02.098977 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:02.203698 1644261 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:49:02.203731 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:49:02.208871 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:02.335611 1644261 logs.go:123] Gathering logs for kubelet ...
	I0420 00:49:02.335713 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0420 00:49:02.411512 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.801309    1518 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.411792 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.801356    1518 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.412589 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815347    1518 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.412756 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815367    1518 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.413022 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.413229 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.413874 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820274    1518 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414080 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820315    1518 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414271 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820622    1518 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414479 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820646    1518 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414667 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.821047    1518 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414879 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.415678 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.416354 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.416560 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.416767 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:02.481101 1644261 logs.go:123] Gathering logs for coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] ...
	I0420 00:49:02.481147 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:02.545331 1644261 logs.go:123] Gathering logs for kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] ...
	I0420 00:49:02.545360 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:02.653396 1644261 logs.go:123] Gathering logs for kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] ...
	I0420 00:49:02.653434 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:02.703574 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:02.718396 1644261 logs.go:123] Gathering logs for kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] ...
	I0420 00:49:02.718473 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:02.784614 1644261 logs.go:123] Gathering logs for container status ...
	I0420 00:49:02.784642 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:49:02.862815 1644261 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:49:02.862918 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
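The log-gathering pass above can be reproduced by hand inside the node (for example via `minikube -p addons-747503 ssh`); a condensed sketch of the same commands the test issues:

	# Find the kube-apiserver container and tail its logs through the CRI:
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	sudo crictl logs --tail 400 "$ID"

	# Kubelet and CRI-O output live in journald:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400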
	I0420 00:49:03.154905 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:03.154978 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0420 00:49:03.155071 1644261 out.go:239] X Problems detected in kubelet:
	W0420 00:49:03.155116 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155296 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155332 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155379 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155413 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:03.155457 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:03.155482 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:49:03.203035 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:03.702362 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:04.203239 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:04.711009 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:05.203956 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:05.702494 1644261 kapi.go:107] duration metric: took 1m43.506280483s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0420 00:49:05.705025 1644261 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0420 00:49:05.707241 1644261 addons.go:505] duration metric: took 1m49.525505308s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
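The kapi.go polls that just completed amount to waiting on pod readiness by label selector; a minimal kubectl equivalent for the ingress controller (the timeout value here is arbitrary):

	kubectl --context addons-747503 wait --namespace ingress-nginx \
	  --for=condition=ready pod \
	  --selector=app.kubernetes.io/name=ingress-nginx --timeout=120s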
	I0420 00:49:13.156219 1644261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:49:13.170608 1644261 api_server.go:72] duration metric: took 1m56.989122484s to wait for apiserver process to appear ...
	I0420 00:49:13.170636 1644261 api_server.go:88] waiting for apiserver healthz status ...
	I0420 00:49:13.170677 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:49:13.170743 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:49:13.215140 1644261 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:13.215162 1644261 cri.go:89] found id: ""
	I0420 00:49:13.215171 1644261 logs.go:276] 1 containers: [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053]
	I0420 00:49:13.215236 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.218892 1644261 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:49:13.218971 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:49:13.263654 1644261 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:13.263682 1644261 cri.go:89] found id: ""
	I0420 00:49:13.263691 1644261 logs.go:276] 1 containers: [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330]
	I0420 00:49:13.263764 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.267679 1644261 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:49:13.267768 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:49:13.309684 1644261 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:13.309708 1644261 cri.go:89] found id: ""
	I0420 00:49:13.309720 1644261 logs.go:276] 1 containers: [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df]
	I0420 00:49:13.309776 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.313423 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:49:13.313507 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:49:13.351369 1644261 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:13.351394 1644261 cri.go:89] found id: ""
	I0420 00:49:13.351403 1644261 logs.go:276] 1 containers: [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929]
	I0420 00:49:13.351459 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.358220 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:49:13.358301 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:49:13.402876 1644261 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:13.402901 1644261 cri.go:89] found id: ""
	I0420 00:49:13.402909 1644261 logs.go:276] 1 containers: [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20]
	I0420 00:49:13.402967 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.406557 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:49:13.406631 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:49:13.446459 1644261 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:13.446528 1644261 cri.go:89] found id: ""
	I0420 00:49:13.446542 1644261 logs.go:276] 1 containers: [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e]
	I0420 00:49:13.446602 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.450261 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:49:13.450351 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:49:13.490186 1644261 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:13.490224 1644261 cri.go:89] found id: ""
	I0420 00:49:13.490234 1644261 logs.go:276] 1 containers: [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52]
	I0420 00:49:13.490331 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.493880 1644261 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:49:13.493909 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:49:13.625695 1644261 logs.go:123] Gathering logs for kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] ...
	I0420 00:49:13.625770 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:13.692424 1644261 logs.go:123] Gathering logs for coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] ...
	I0420 00:49:13.692460 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:13.739447 1644261 logs.go:123] Gathering logs for kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] ...
	I0420 00:49:13.739479 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:13.783910 1644261 logs.go:123] Gathering logs for container status ...
	I0420 00:49:13.783946 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:49:13.846042 1644261 logs.go:123] Gathering logs for kubelet ...
	I0420 00:49:13.846079 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0420 00:49:13.886398 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.801309    1518 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.886620 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.801356    1518 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887405 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815347    1518 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887575 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815367    1518 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887758 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887960 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.888582 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820274    1518 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.888784 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820315    1518 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.888970 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820622    1518 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.889177 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820646    1518 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.889365 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.821047    1518 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.889580 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890396 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890599 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890791 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890996 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:13.938460 1644261 logs.go:123] Gathering logs for dmesg ...
	I0420 00:49:13.938494 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:49:13.965511 1644261 logs.go:123] Gathering logs for kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] ...
	I0420 00:49:13.965647 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:14.058549 1644261 logs.go:123] Gathering logs for kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] ...
	I0420 00:49:14.058589 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:14.107649 1644261 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:49:14.107678 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:49:14.200718 1644261 logs.go:123] Gathering logs for etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] ...
	I0420 00:49:14.200757 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:14.253776 1644261 logs.go:123] Gathering logs for kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] ...
	I0420 00:49:14.253816 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:14.295736 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:14.295762 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0420 00:49:14.295814 1644261 out.go:239] X Problems detected in kubelet:
	W0420 00:49:14.295828 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295836 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295844 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295852 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295857 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:14.295871 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:14.295877 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:49:24.297174 1644261 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 00:49:24.304890 1644261 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0420 00:49:24.305901 1644261 api_server.go:141] control plane version: v1.30.0
	I0420 00:49:24.305926 1644261 api_server.go:131] duration metric: took 11.135283023s to wait for apiserver health ...
	I0420 00:49:24.305935 1644261 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 00:49:24.305957 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:49:24.306023 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:49:24.342719 1644261 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:24.342741 1644261 cri.go:89] found id: ""
	I0420 00:49:24.342749 1644261 logs.go:276] 1 containers: [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053]
	I0420 00:49:24.342812 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.346322 1644261 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:49:24.346394 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:49:24.390679 1644261 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:24.390702 1644261 cri.go:89] found id: ""
	I0420 00:49:24.390710 1644261 logs.go:276] 1 containers: [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330]
	I0420 00:49:24.390791 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.394567 1644261 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:49:24.394662 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:49:24.442284 1644261 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:24.442307 1644261 cri.go:89] found id: ""
	I0420 00:49:24.442315 1644261 logs.go:276] 1 containers: [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df]
	I0420 00:49:24.442382 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.446024 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:49:24.446108 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:49:24.484224 1644261 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:24.484248 1644261 cri.go:89] found id: ""
	I0420 00:49:24.484260 1644261 logs.go:276] 1 containers: [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929]
	I0420 00:49:24.484317 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.488065 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:49:24.488140 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:49:24.561054 1644261 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:24.561075 1644261 cri.go:89] found id: ""
	I0420 00:49:24.561085 1644261 logs.go:276] 1 containers: [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20]
	I0420 00:49:24.561141 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.564741 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:49:24.564860 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:49:24.605384 1644261 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:24.605444 1644261 cri.go:89] found id: ""
	I0420 00:49:24.605466 1644261 logs.go:276] 1 containers: [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e]
	I0420 00:49:24.605568 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.609475 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:49:24.610101 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:49:24.647409 1644261 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:24.647432 1644261 cri.go:89] found id: ""
	I0420 00:49:24.647441 1644261 logs.go:276] 1 containers: [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52]
	I0420 00:49:24.647516 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.650908 1644261 logs.go:123] Gathering logs for kubelet ...
	I0420 00:49:24.650933 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0420 00:49:24.687053 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.801309    1518 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.687296 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.801356    1518 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688077 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815347    1518 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688245 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815367    1518 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688430 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688630 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689257 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820274    1518 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689459 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820315    1518 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689656 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820622    1518 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689866 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820646    1518 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.690051 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.821047    1518 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.690258 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691080 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691285 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691472 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691679 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:24.740157 1644261 logs.go:123] Gathering logs for dmesg ...
	I0420 00:49:24.740187 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:49:24.760602 1644261 logs.go:123] Gathering logs for kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] ...
	I0420 00:49:24.760632 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:24.828968 1644261 logs.go:123] Gathering logs for etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] ...
	I0420 00:49:24.829007 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:24.876633 1644261 logs.go:123] Gathering logs for kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] ...
	I0420 00:49:24.876671 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:24.922399 1644261 logs.go:123] Gathering logs for container status ...
	I0420 00:49:24.922431 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:49:24.969473 1644261 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:49:24.969505 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:49:25.149062 1644261 logs.go:123] Gathering logs for coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] ...
	I0420 00:49:25.149098 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:25.194458 1644261 logs.go:123] Gathering logs for kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] ...
	I0420 00:49:25.194489 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:25.247513 1644261 logs.go:123] Gathering logs for kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] ...
	I0420 00:49:25.247547 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:25.283929 1644261 logs.go:123] Gathering logs for kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] ...
	I0420 00:49:25.283956 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:25.350599 1644261 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:49:25.350633 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:49:25.466073 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:25.466105 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0420 00:49:25.466176 1644261 out.go:239] X Problems detected in kubelet:
	W0420 00:49:25.466192 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466205 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466234 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466254 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466269 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:25.466276 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:25.466287 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:49:35.482334 1644261 system_pods.go:59] 18 kube-system pods found
	I0420 00:49:35.482377 1644261 system_pods.go:61] "coredns-7db6d8ff4d-pj8wd" [ce9c9144-65d1-45f2-a6e0-65ac4c220237] Running
	I0420 00:49:35.482384 1644261 system_pods.go:61] "csi-hostpath-attacher-0" [1407d955-83ec-4b1d-ac07-d55e593f975f] Running
	I0420 00:49:35.482389 1644261 system_pods.go:61] "csi-hostpath-resizer-0" [023884e7-abc6-4359-95ba-ee8031b2db76] Running
	I0420 00:49:35.482394 1644261 system_pods.go:61] "csi-hostpathplugin-z7j5n" [b938be04-8aac-427e-a62d-e0d6ecea4fe9] Running
	I0420 00:49:35.482399 1644261 system_pods.go:61] "etcd-addons-747503" [707cce58-27c7-483a-9f12-80d354c6e443] Running
	I0420 00:49:35.482402 1644261 system_pods.go:61] "kindnet-x7szp" [910dbd2a-9863-4585-8a5d-98c1bb4817e2] Running
	I0420 00:49:35.482407 1644261 system_pods.go:61] "kube-apiserver-addons-747503" [81db4265-6e75-41b4-85b6-c7e09e1979a7] Running
	I0420 00:49:35.482411 1644261 system_pods.go:61] "kube-controller-manager-addons-747503" [f4cfdf92-3a76-49c4-b1f6-3bc7cf34cd49] Running
	I0420 00:49:35.482420 1644261 system_pods.go:61] "kube-ingress-dns-minikube" [ec712066-7b44-45dc-a961-0f7688a75714] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0420 00:49:35.482431 1644261 system_pods.go:61] "kube-proxy-cmk9r" [13976009-573c-4b43-8062-07d9a92cb809] Running
	I0420 00:49:35.482441 1644261 system_pods.go:61] "kube-scheduler-addons-747503" [4c4ccef8-4e11-425f-9dc6-178584aa294d] Running
	I0420 00:49:35.482445 1644261 system_pods.go:61] "metrics-server-c59844bb4-jmtz4" [582654f0-7046-465f-b015-d889d5397c3c] Running
	I0420 00:49:35.482458 1644261 system_pods.go:61] "nvidia-device-plugin-daemonset-8wcvh" [1dc1e685-c035-4a95-99c7-d40ef680694c] Running
	I0420 00:49:35.482462 1644261 system_pods.go:61] "registry-proxy-5c8mf" [78326941-b968-43a4-865c-3f7c843b92c7] Running
	I0420 00:49:35.482466 1644261 system_pods.go:61] "registry-sx6fv" [c3fda03d-8cd2-4cff-9835-e17c079b7e05] Running
	I0420 00:49:35.482470 1644261 system_pods.go:61] "snapshot-controller-745499f584-7chnh" [1d82f222-8775-4214-b579-247919a249be] Running
	I0420 00:49:35.482474 1644261 system_pods.go:61] "snapshot-controller-745499f584-nk457" [a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c] Running
	I0420 00:49:35.482478 1644261 system_pods.go:61] "storage-provisioner" [c64f875a-fc82-45a9-acce-a3f649735d47] Running
	I0420 00:49:35.482493 1644261 system_pods.go:74] duration metric: took 11.176551903s to wait for pod list to return data ...
	I0420 00:49:35.482501 1644261 default_sa.go:34] waiting for default service account to be created ...
	I0420 00:49:35.485056 1644261 default_sa.go:45] found service account: "default"
	I0420 00:49:35.485086 1644261 default_sa.go:55] duration metric: took 2.576218ms for default service account to be created ...
	I0420 00:49:35.485096 1644261 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 00:49:35.495868 1644261 system_pods.go:86] 18 kube-system pods found
	I0420 00:49:35.495904 1644261 system_pods.go:89] "coredns-7db6d8ff4d-pj8wd" [ce9c9144-65d1-45f2-a6e0-65ac4c220237] Running
	I0420 00:49:35.495912 1644261 system_pods.go:89] "csi-hostpath-attacher-0" [1407d955-83ec-4b1d-ac07-d55e593f975f] Running
	I0420 00:49:35.495918 1644261 system_pods.go:89] "csi-hostpath-resizer-0" [023884e7-abc6-4359-95ba-ee8031b2db76] Running
	I0420 00:49:35.495922 1644261 system_pods.go:89] "csi-hostpathplugin-z7j5n" [b938be04-8aac-427e-a62d-e0d6ecea4fe9] Running
	I0420 00:49:35.495926 1644261 system_pods.go:89] "etcd-addons-747503" [707cce58-27c7-483a-9f12-80d354c6e443] Running
	I0420 00:49:35.495931 1644261 system_pods.go:89] "kindnet-x7szp" [910dbd2a-9863-4585-8a5d-98c1bb4817e2] Running
	I0420 00:49:35.495936 1644261 system_pods.go:89] "kube-apiserver-addons-747503" [81db4265-6e75-41b4-85b6-c7e09e1979a7] Running
	I0420 00:49:35.495940 1644261 system_pods.go:89] "kube-controller-manager-addons-747503" [f4cfdf92-3a76-49c4-b1f6-3bc7cf34cd49] Running
	I0420 00:49:35.495951 1644261 system_pods.go:89] "kube-ingress-dns-minikube" [ec712066-7b44-45dc-a961-0f7688a75714] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0420 00:49:35.495962 1644261 system_pods.go:89] "kube-proxy-cmk9r" [13976009-573c-4b43-8062-07d9a92cb809] Running
	I0420 00:49:35.495977 1644261 system_pods.go:89] "kube-scheduler-addons-747503" [4c4ccef8-4e11-425f-9dc6-178584aa294d] Running
	I0420 00:49:35.495981 1644261 system_pods.go:89] "metrics-server-c59844bb4-jmtz4" [582654f0-7046-465f-b015-d889d5397c3c] Running
	I0420 00:49:35.495986 1644261 system_pods.go:89] "nvidia-device-plugin-daemonset-8wcvh" [1dc1e685-c035-4a95-99c7-d40ef680694c] Running
	I0420 00:49:35.495993 1644261 system_pods.go:89] "registry-proxy-5c8mf" [78326941-b968-43a4-865c-3f7c843b92c7] Running
	I0420 00:49:35.495999 1644261 system_pods.go:89] "registry-sx6fv" [c3fda03d-8cd2-4cff-9835-e17c079b7e05] Running
	I0420 00:49:35.496006 1644261 system_pods.go:89] "snapshot-controller-745499f584-7chnh" [1d82f222-8775-4214-b579-247919a249be] Running
	I0420 00:49:35.496011 1644261 system_pods.go:89] "snapshot-controller-745499f584-nk457" [a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c] Running
	I0420 00:49:35.496015 1644261 system_pods.go:89] "storage-provisioner" [c64f875a-fc82-45a9-acce-a3f649735d47] Running
	I0420 00:49:35.496023 1644261 system_pods.go:126] duration metric: took 10.920416ms to wait for k8s-apps to be running ...
	I0420 00:49:35.496034 1644261 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 00:49:35.496098 1644261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:49:35.510334 1644261 system_svc.go:56] duration metric: took 14.291022ms WaitForService to wait for kubelet
	I0420 00:49:35.510421 1644261 kubeadm.go:576] duration metric: took 2m19.328937561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:49:35.510458 1644261 node_conditions.go:102] verifying NodePressure condition ...
	I0420 00:49:35.513887 1644261 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0420 00:49:35.513920 1644261 node_conditions.go:123] node cpu capacity is 2
	I0420 00:49:35.513932 1644261 node_conditions.go:105] duration metric: took 3.453007ms to run NodePressure ...
	I0420 00:49:35.513944 1644261 start.go:240] waiting for startup goroutines ...
	I0420 00:49:35.513972 1644261 start.go:245] waiting for cluster config update ...
	I0420 00:49:35.514000 1644261 start.go:254] writing updated cluster config ...
	I0420 00:49:35.514532 1644261 ssh_runner.go:195] Run: rm -f paused
	I0420 00:49:35.939859 1644261 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 00:49:35.941995 1644261 out.go:177] * Done! kubectl is now configured to use "addons-747503" cluster and "default" namespace by default
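	
	A note on the kubelet problems flagged above: "no relationship found between node 'addons-747503' and this object" is emitted by the Kubernetes node authorizer, which only lets a kubelet read Secrets and ConfigMaps referenced by pods already bound to that node. During addon bring-up the kubelet's reflectors can race pod binding, so these warnings are typically transient. A minimal re-check, assuming kubectl is pointed at this cluster (a broad list is expected to be denied under the node authorizer; per-object reads succeed once a referencing pod is bound):
	
	  # Expected answer is "no" - the node authorizer grants no blanket list access.
	  kubectl auth can-i list configmaps -n gcp-auth \
	    --as system:node:addons-747503 --as-group system:nodes
	
	  # Count whether the warnings kept recurring after the 00:47:50 burst.
	  minikube -p addons-747503 ssh -- "sudo journalctl -u kubelet | grep -c 'no relationship found'"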
	
	
	==> CRI-O <==
	Apr 20 00:53:39 addons-747503 crio[920]: time="2024-04-20 00:53:39.358529850Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=501231ae-5469-4079-955d-a182a6fdd3a6 name=/runtime.v1.ImageService/ImageStatus
	Apr 20 00:53:39 addons-747503 crio[920]: time="2024-04-20 00:53:39.359495154Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=83e7dd07-5a58-4545-bf5c-1bbbf3964ebb name=/runtime.v1.ImageService/ImageStatus
	Apr 20 00:53:39 addons-747503 crio[920]: time="2024-04-20 00:53:39.359709597Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=83e7dd07-5a58-4545-bf5c-1bbbf3964ebb name=/runtime.v1.ImageService/ImageStatus
	Apr 20 00:53:39 addons-747503 crio[920]: time="2024-04-20 00:53:39.360507654Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-j7hjs/hello-world-app" id=57d3c038-7feb-4f35-bfcb-6797bdcb484a name=/runtime.v1.RuntimeService/CreateContainer
	Apr 20 00:53:39 addons-747503 crio[920]: time="2024-04-20 00:53:39.360603799Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 20 00:53:39 addons-747503 crio[920]: time="2024-04-20 00:53:39.424820313Z" level=info msg="Created container 08c1bcfd72f398136b60482fe95a4f6e470e05801209f5833146c8dc87586ba8: default/hello-world-app-86c47465fc-j7hjs/hello-world-app" id=57d3c038-7feb-4f35-bfcb-6797bdcb484a name=/runtime.v1.RuntimeService/CreateContainer
	Apr 20 00:53:39 addons-747503 crio[920]: time="2024-04-20 00:53:39.425942741Z" level=info msg="Starting container: 08c1bcfd72f398136b60482fe95a4f6e470e05801209f5833146c8dc87586ba8" id=1c403157-0790-4b80-8e4f-7c907b0dc624 name=/runtime.v1.RuntimeService/StartContainer
	Apr 20 00:53:39 addons-747503 crio[920]: time="2024-04-20 00:53:39.434833981Z" level=info msg="Started container" PID=8725 containerID=08c1bcfd72f398136b60482fe95a4f6e470e05801209f5833146c8dc87586ba8 description=default/hello-world-app-86c47465fc-j7hjs/hello-world-app id=1c403157-0790-4b80-8e4f-7c907b0dc624 name=/runtime.v1.RuntimeService/StartContainer sandboxID=794425e63ed03b2adefd1957ebb2c482f0f81978b5dc0fd339dffcd01ca4fff5
	Apr 20 00:53:39 addons-747503 conmon[8713]: conmon 08c1bcfd72f398136b60 <ninfo>: container 8725 exited with status 1
	Apr 20 00:53:39 addons-747503 crio[920]: time="2024-04-20 00:53:39.996464754Z" level=warning msg="Stopping container 350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=5afad98b-03b9-4b15-bdbf-98c4fb076eaf name=/runtime.v1.RuntimeService/StopContainer
	Apr 20 00:53:40 addons-747503 conmon[5408]: conmon 350ed3938381cd9760ab <ninfo>: container 5419 exited with status 137
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.142914882Z" level=info msg="Stopped container 350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f: ingress-nginx/ingress-nginx-controller-84df5799c-j6rf2/controller" id=5afad98b-03b9-4b15-bdbf-98c4fb076eaf name=/runtime.v1.RuntimeService/StopContainer
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.143462254Z" level=info msg="Stopping pod sandbox: a4bb8a14e383ec73d5be06313f2bbd9841dfbd9f2e9f5d30f1ead2a7f21968df" id=068ae8b3-bde3-4176-9b28-3acc92002b51 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.146854646Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-4CBCNKG7HHUNFQXP - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-K3E3LFPQ7DD5Y3FM - [0:0]\n-X KUBE-HP-4CBCNKG7HHUNFQXP\n-X KUBE-HP-K3E3LFPQ7DD5Y3FM\nCOMMIT\n"
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.148398444Z" level=info msg="Closing host port tcp:80"
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.148448461Z" level=info msg="Closing host port tcp:443"
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.149870704Z" level=info msg="Host port tcp:80 does not have an open socket"
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.149901029Z" level=info msg="Host port tcp:443 does not have an open socket"
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.150069558Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-84df5799c-j6rf2 Namespace:ingress-nginx ID:a4bb8a14e383ec73d5be06313f2bbd9841dfbd9f2e9f5d30f1ead2a7f21968df UID:d5ddb622-d22b-4f2b-b279-493146cf0e6a NetNS:/var/run/netns/fbd8a93f-8591-437e-8e94-a9cb5511ccf6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.150228355Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-84df5799c-j6rf2 from CNI network \"kindnet\" (type=ptp)"
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.182038463Z" level=info msg="Stopped pod sandbox: a4bb8a14e383ec73d5be06313f2bbd9841dfbd9f2e9f5d30f1ead2a7f21968df" id=068ae8b3-bde3-4176-9b28-3acc92002b51 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.241133243Z" level=info msg="Removing container: 350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f" id=c306611b-47e4-455a-a88d-97c86d92ab1b name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.264853614Z" level=info msg="Removed container 350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f: ingress-nginx/ingress-nginx-controller-84df5799c-j6rf2/controller" id=c306611b-47e4-455a-a88d-97c86d92ab1b name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.266834749Z" level=info msg="Removing container: ecb26851b5633392f87949fd0f71fe5ae22513f143549b1d1f26877dd8cde6a1" id=36aa146b-c810-4535-a7ac-0753593e952f name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:53:40 addons-747503 crio[920]: time="2024-04-20 00:53:40.282934702Z" level=info msg="Removed container ecb26851b5633392f87949fd0f71fe5ae22513f143549b1d1f26877dd8cde6a1: default/hello-world-app-86c47465fc-j7hjs/hello-world-app" id=36aa146b-c810-4535-a7ac-0753593e952f name=/runtime.v1.RuntimeService/RemoveContainer
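	
	The teardown above is CRI-O's normal stop path for the ingress controller: a stop signal, a 2-second grace timeout, then a kill. Conmon's "exited with status 137" is 128 + 9, i.e. SIGKILL after the timeout, whereas hello-world-app's status 1 a few lines earlier is the application itself exiting. A sketch for confirming recorded exit codes on the node, using the hello-world-app container ID reported above (the controller's container was already removed, so only still-present containers can be inspected):
	
	  sudo crictl ps -a --state exited
	  sudo crictl inspect 08c1bcfd72f398136b60 | grep -i exitCode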
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08c1bcfd72f39       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             5 seconds ago       Exited              hello-world-app           2                   794425e63ed03       hello-world-app-86c47465fc-j7hjs
	70d7dbd6eb021       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:a0c58a03bd7b31512e187f86e72a18feb6fb938e744a713efcfe5ef5418aa1cd            11 seconds ago      Exited              gadget                    6                   91e2dd6b49779       gadget-j48lz
	3059b1b73e48e       docker.io/library/nginx@sha256:7bd88800d8c18d4f73feeee25e04fcdbeecfc5e0a2b7254a90f4816bb67beadd                              2 minutes ago       Running             nginx                     0                   3d907a9a1b360       nginx
	065eeb203edc3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 4 minutes ago       Running             gcp-auth                  0                   12318430bbe8d       gcp-auth-5db96cd9b4-dg9c5
	c7bd8cacd1c82       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   b1192b4bbd9c1       yakd-dashboard-5ddbf7d777-q5cff
	ce2467374744f       1a024e390dd050d584b5c93bb30810e8be713157ab713b0d77a7af14dfe88c1e                                                             5 minutes ago       Exited              patch                     1                   df393a718fd03       ingress-nginx-admission-patch-zm4pk
	2bd8cbc349360       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0b1098ef00acee905f9736f98dd151af0a38d0fef0ccf9fb5ad189b20933e5f8   5 minutes ago       Exited              create                    0                   f2bef6044a1b7       ingress-nginx-admission-create-p788l
	d44171fb37303       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   48679652e7ffe       metrics-server-c59844bb4-jmtz4
	22c56a3e8a0fe       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   d7e961b6341a3       storage-provisioner
	dfc51e1c1bccd       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             5 minutes ago       Running             coredns                   0                   1d5e91a66a006       coredns-7db6d8ff4d-pj8wd
	b21e49c0bda54       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                             6 minutes ago       Running             kindnet-cni               0                   48b6b802564de       kindnet-x7szp
	8504f24d60ff9       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f                                                             6 minutes ago       Running             kube-proxy                0                   a5fb2119d00b2       kube-proxy-cmk9r
	efdbc1a5337c8       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a                                                             6 minutes ago       Running             kube-scheduler            0                   db745aaf12fb3       kube-scheduler-addons-747503
	d7b31a1429803       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb                                                             6 minutes ago       Running             kube-apiserver            0                   e3358216037d1       kube-apiserver-addons-747503
	120c278a1bb92       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1                                                             6 minutes ago       Running             kube-controller-manager   0                   32129d92cb9e3       kube-controller-manager-addons-747503
	dc5579e3b8be4       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             6 minutes ago       Running             etcd                      0                   0793765290d5b       etcd-addons-747503
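	
	Reading the table: hello-world-app is Exited on ATTEMPT 2 and gadget is Exited on ATTEMPT 6, i.e. both are crash-looping, while the control-plane and addon containers have been Running for five to six minutes. A follow-up sketch to pull the failing pod's output, reusing the ID prefix from the first row and the same crictl invocations this log already uses:
	
	  sudo crictl ps -a --name hello-world-app
	  sudo crictl logs --tail 50 08c1bcfd72f39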
	
	
	==> coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] <==
	[INFO] 10.244.0.20:39271 - 49899 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000182116s
	[INFO] 10.244.0.20:39271 - 36853 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00015339s
	[INFO] 10.244.0.20:39271 - 54108 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071358s
	[INFO] 10.244.0.20:39271 - 23540 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064244s
	[INFO] 10.244.0.20:39271 - 36450 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001425418s
	[INFO] 10.244.0.20:39271 - 1160 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001283564s
	[INFO] 10.244.0.20:39271 - 55387 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069496s
	[INFO] 10.244.0.20:50025 - 38066 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000115434s
	[INFO] 10.244.0.20:58091 - 47793 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042502s
	[INFO] 10.244.0.20:50025 - 53522 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067075s
	[INFO] 10.244.0.20:50025 - 60527 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046842s
	[INFO] 10.244.0.20:50025 - 38545 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050107s
	[INFO] 10.244.0.20:58091 - 3541 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036602s
	[INFO] 10.244.0.20:58091 - 40598 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040417s
	[INFO] 10.244.0.20:50025 - 10978 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030498s
	[INFO] 10.244.0.20:50025 - 47695 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048064s
	[INFO] 10.244.0.20:58091 - 53055 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050575s
	[INFO] 10.244.0.20:58091 - 54793 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037406s
	[INFO] 10.244.0.20:58091 - 6655 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000079702s
	[INFO] 10.244.0.20:50025 - 33288 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001386388s
	[INFO] 10.244.0.20:58091 - 16986 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004227329s
	[INFO] 10.244.0.20:50025 - 6931 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003492319s
	[INFO] 10.244.0.20:50025 - 7397 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061315s
	[INFO] 10.244.0.20:58091 - 1995 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001219836s
	[INFO] 10.244.0.20:58091 - 57192 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006737s
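	
	The NXDOMAIN fan-out above is expected in-cluster DNS behavior rather than a failure: pod resolv.conf files are written with "options ndots:5", so a name like hello-world-app.default.svc.cluster.local (four dots) is first tried with every suffix on the pod's search path (here ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the absolute form answers NOERROR. A minimal check from a pod in the cluster, using the nginx pod listed elsewhere in this report and assuming kubectl access:
	
	  kubectl exec -n default nginx -- cat /etc/resolv.conf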
	
	
	==> describe nodes <==
	Name:               addons-747503
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-747503
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=addons-747503
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T00_47_03_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-747503
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:46:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-747503
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:53:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:53:42 +0000   Sat, 20 Apr 2024 00:46:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:53:42 +0000   Sat, 20 Apr 2024 00:46:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:53:42 +0000   Sat, 20 Apr 2024 00:46:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:53:42 +0000   Sat, 20 Apr 2024 00:47:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-747503
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb345c96e51549588e3445f8f88cea8c
	  System UUID:                338aa8bd-646a-4cfc-b77a-f650366b6c8a
	  Boot ID:                    cdaae8f5-66dd-4dda-afdc-9b84bbb262c1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-j7hjs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gadget                      gadget-j48lz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  gcp-auth                    gcp-auth-5db96cd9b4-dg9c5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 coredns-7db6d8ff4d-pj8wd                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m29s
	  kube-system                 etcd-addons-747503                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m43s
	  kube-system                 kindnet-x7szp                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m29s
	  kube-system                 kube-apiserver-addons-747503             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-controller-manager-addons-747503    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-proxy-cmk9r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-scheduler-addons-747503             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 metrics-server-c59844bb4-jmtz4           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m25s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-q5cff          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m50s (x8 over 6m50s)  kubelet          Node addons-747503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m50s (x8 over 6m50s)  kubelet          Node addons-747503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m50s (x8 over 6m50s)  kubelet          Node addons-747503 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m43s                  kubelet          Node addons-747503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s                  kubelet          Node addons-747503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s                  kubelet          Node addons-747503 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s                  node-controller  Node addons-747503 event: Registered Node addons-747503 in Controller
	  Normal  NodeReady                5m55s                  kubelet          Node addons-747503 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000807] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000be586629
	[  +0.001092] FS-Cache: N-key=[8] '15d8c90000000000'
	[  +0.002828] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=000000008f558ce4
	[  +0.001060] FS-Cache: O-key=[8] '15d8c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000906] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000f46698ff
	[  +0.001011] FS-Cache: N-key=[8] '15d8c90000000000'
	[  +3.061970] FS-Cache: Duplicate cookie detected
	[  +0.000754] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001064] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=00000000ea440894
	[  +0.001029] FS-Cache: O-key=[8] '14d8c90000000000'
	[  +0.000778] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001045] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000999f4db4
	[  +0.001563] FS-Cache: N-key=[8] '14d8c90000000000'
	[  +0.297624] FS-Cache: Duplicate cookie detected
	[  +0.000690] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000919] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=00000000e5d6a697
	[  +0.001014] FS-Cache: O-key=[8] '1ad8c90000000000'
	[  +0.000691] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001016] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000be586629
	[  +0.001047] FS-Cache: N-key=[8] '1ad8c90000000000'
	[Apr20 00:19] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] <==
	{"level":"info","ts":"2024-04-20T00:46:57.326346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T00:46:57.327877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-20T00:46:57.329336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T00:46:57.333604Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:46:57.388525Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:46:57.388574Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-04-20T00:47:16.924241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.94691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-cmk9r\" ","response":"range_response_count:1 size:4633"}
	{"level":"info","ts":"2024-04-20T00:47:16.924761Z","caller":"traceutil/trace.go:171","msg":"trace[1215544873] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-cmk9r; range_end:; response_count:1; response_revision:378; }","duration":"125.496234ms","start":"2024-04-20T00:47:16.799258Z","end":"2024-04-20T00:47:16.924754Z","steps":["trace[1215544873] 'agreement among raft nodes before linearized reading'  (duration: 124.890747ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.924393Z","caller":"traceutil/trace.go:171","msg":"trace[153633669] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"125.22924ms","start":"2024-04-20T00:47:16.799148Z","end":"2024-04-20T00:47:16.924378Z","steps":["trace[153633669] 'process raft request'  (duration: 124.905082ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.924546Z","caller":"traceutil/trace.go:171","msg":"trace[925466315] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"125.445036ms","start":"2024-04-20T00:47:16.799094Z","end":"2024-04-20T00:47:16.924539Z","steps":["trace[925466315] 'process raft request'  (duration: 124.874059ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.924681Z","caller":"traceutil/trace.go:171","msg":"trace[470147017] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"125.389169ms","start":"2024-04-20T00:47:16.799285Z","end":"2024-04-20T00:47:16.924674Z","steps":["trace[470147017] 'process raft request'  (duration: 124.793429ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.92472Z","caller":"traceutil/trace.go:171","msg":"trace[1120083311] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"125.391983ms","start":"2024-04-20T00:47:16.799322Z","end":"2024-04-20T00:47:16.924714Z","steps":["trace[1120083311] 'process raft request'  (duration: 124.790048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:18.67798Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.383847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-x7szp\" ","response":"range_response_count:1 size:4910"}
	{"level":"info","ts":"2024-04-20T00:47:18.755921Z","caller":"traceutil/trace.go:171","msg":"trace[1811675749] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:402; }","duration":"118.796807ms","start":"2024-04-20T00:47:18.637099Z","end":"2024-04-20T00:47:18.755896Z","steps":["trace[1811675749] 'read index received'  (duration: 118.790883ms)","trace[1811675749] 'applied index is now lower than readState.Index'  (duration: 4.701µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T00:47:18.789714Z","caller":"traceutil/trace.go:171","msg":"trace[1490804175] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-x7szp; range_end:; response_count:1; response_revision:389; }","duration":"308.029626ms","start":"2024-04-20T00:47:18.448578Z","end":"2024-04-20T00:47:18.756608Z","steps":["trace[1490804175] 'agreement among raft nodes before linearized reading'  (duration: 229.28301ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.14332Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.448539Z","time spent":"694.609452ms","remote":"127.0.0.1:48224","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":4934,"request content":"key:\"/registry/pods/kube-system/kindnet-x7szp\" "}
	{"level":"warn","ts":"2024-04-20T00:47:19.166679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"490.097119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-7db6d8ff4d-pj8wd.17c7d681fb332ee5\" ","response":"range_response_count:1 size:844"}
	{"level":"info","ts":"2024-04-20T00:47:19.183579Z","caller":"traceutil/trace.go:171","msg":"trace[1385382603] range","detail":"{range_begin:/registry/events/kube-system/coredns-7db6d8ff4d-pj8wd.17c7d681fb332ee5; range_end:; response_count:1; response_revision:389; }","duration":"497.334583ms","start":"2024-04-20T00:47:18.676557Z","end":"2024-04-20T00:47:19.173891Z","steps":["trace[1385382603] 'agreement among raft nodes before linearized reading'  (duration: 490.031645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.189018Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.676516Z","time spent":"512.471167ms","remote":"127.0.0.1:48100","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":868,"request content":"key:\"/registry/events/kube-system/coredns-7db6d8ff4d-pj8wd.17c7d681fb332ee5\" "}
	{"level":"warn","ts":"2024-04-20T00:47:19.186049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"429.718807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-04-20T00:47:19.186088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"508.974865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-20T00:47:19.191942Z","caller":"traceutil/trace.go:171","msg":"trace[2029096551] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:389; }","duration":"514.81594ms","start":"2024-04-20T00:47:18.677109Z","end":"2024-04-20T00:47:19.191925Z","steps":["trace[2029096551] 'agreement among raft nodes before linearized reading'  (duration: 508.967169ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.25392Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.677088Z","time spent":"576.805526ms","remote":"127.0.0.1:48534","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":29,"request content":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" "}
	{"level":"info","ts":"2024-04-20T00:47:19.224875Z","caller":"traceutil/trace.go:171","msg":"trace[1801637594] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:389; }","duration":"468.545511ms","start":"2024-04-20T00:47:18.756311Z","end":"2024-04-20T00:47:19.224857Z","steps":["trace[1801637594] 'agreement among raft nodes before linearized reading'  (duration: 429.691937ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.254181Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.756257Z","time spent":"497.915488ms","remote":"127.0.0.1:48514","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/default/cloud-spanner-emulator\" "}
	
	
	==> gcp-auth [065eeb203edc3606ff24136ef272bf67f73b81ea9764ef0b86090be0bcf9d3e6] <==
	2024/04/20 00:48:57 GCP Auth Webhook started!
	2024/04/20 00:49:47 Ready to marshal response ...
	2024/04/20 00:49:47 Ready to write response ...
	2024/04/20 00:49:47 Ready to marshal response ...
	2024/04/20 00:49:47 Ready to write response ...
	2024/04/20 00:50:05 Ready to marshal response ...
	2024/04/20 00:50:05 Ready to write response ...
	2024/04/20 00:50:05 Ready to marshal response ...
	2024/04/20 00:50:05 Ready to write response ...
	2024/04/20 00:50:12 Ready to marshal response ...
	2024/04/20 00:50:12 Ready to write response ...
	2024/04/20 00:50:14 Ready to marshal response ...
	2024/04/20 00:50:14 Ready to write response ...
	2024/04/20 00:50:58 Ready to marshal response ...
	2024/04/20 00:50:58 Ready to write response ...
	2024/04/20 00:53:19 Ready to marshal response ...
	2024/04/20 00:53:19 Ready to write response ...
	
	
	==> kernel <==
	 00:53:45 up  7:36,  0 users,  load average: 0.32, 1.22, 1.93
	Linux addons-747503 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] <==
	I0420 00:51:40.678951       1 main.go:227] handling current node
	I0420 00:51:50.687872       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:51:50.687904       1 main.go:227] handling current node
	I0420 00:52:00.698448       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:52:00.698476       1 main.go:227] handling current node
	I0420 00:52:10.708866       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:52:10.708895       1 main.go:227] handling current node
	I0420 00:52:20.712587       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:52:20.712617       1 main.go:227] handling current node
	I0420 00:52:30.718567       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:52:30.718594       1 main.go:227] handling current node
	I0420 00:52:40.729729       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:52:40.729754       1 main.go:227] handling current node
	I0420 00:52:50.742262       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:52:50.742294       1 main.go:227] handling current node
	I0420 00:53:00.747382       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:53:00.747410       1 main.go:227] handling current node
	I0420 00:53:10.759846       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:53:10.759875       1 main.go:227] handling current node
	I0420 00:53:20.764737       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:53:20.764767       1 main.go:227] handling current node
	I0420 00:53:30.768654       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:53:30.768682       1 main.go:227] handling current node
	I0420 00:53:40.772985       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:53:40.773013       1 main.go:227] handling current node
	
	
	==> kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] <==
	W0420 00:49:01.186508       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 00:49:01.186618       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0420 00:49:01.187795       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	E0420 00:49:01.193314       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	E0420 00:49:01.214414       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	I0420 00:49:01.426379       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	http2: server: error reading preface from client 192.168.49.1:38014: read tcp 192.168.49.2:8443->192.168.49.1:38014: read: connection reset by peer
	I0420 00:50:00.595117       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0420 00:50:28.670545       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.670597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:50:28.715804       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.715853       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:50:28.748957       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.749037       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:50:28.834347       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.836069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0420 00:50:28.986860       1 watch.go:250] http2: stream closed
	W0420 00:50:29.716466       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0420 00:50:29.834147       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0420 00:50:29.854747       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0420 00:50:30.599412       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0420 00:50:58.242507       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0420 00:50:58.551717       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.222.26"}
	I0420 00:53:19.962474       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.159.228"}
	
	
	==> kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] <==
	W0420 00:52:09.785915       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:52:09.785953       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:52:30.249165       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:52:30.249216       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:52:48.491123       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:52:48.491162       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:52:48.805339       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:52:48.805375       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:53:05.368538       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:53:05.368578       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:53:19.766414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="45.377294ms"
	I0420 00:53:19.806065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="39.590249ms"
	I0420 00:53:19.838979       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="32.756179ms"
	I0420 00:53:19.839092       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="44.865µs"
	I0420 00:53:24.213993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="43.108µs"
	I0420 00:53:25.216299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="39.867µs"
	W0420 00:53:32.721311       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:53:32.721349       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:53:36.964047       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0420 00:53:36.971073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="4.701µs"
	I0420 00:53:36.975154       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0420 00:53:39.374558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="39.982µs"
	I0420 00:53:40.262851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="850.527µs"
	W0420 00:53:44.924672       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:53:44.924709       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] <==
	I0420 00:47:21.144641       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:47:21.218151       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0420 00:47:21.752491       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0420 00:47:21.752619       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:47:21.778013       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0420 00:47:21.778127       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0420 00:47:21.778180       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:47:21.778432       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:47:21.778963       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:47:21.779916       1 config.go:192] "Starting service config controller"
	I0420 00:47:21.780016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:47:21.780072       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:47:21.780100       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:47:21.780654       1 config.go:319] "Starting node config controller"
	I0420 00:47:21.780711       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:47:21.880269       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:47:21.885308       1 shared_informer.go:320] Caches are synced for node config
	I0420 00:47:21.885552       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] <==
	W0420 00:46:59.940609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:46:59.940660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:46:59.940762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 00:46:59.940813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 00:46:59.940912       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:46:59.940952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:46:59.941040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:46:59.941188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:46:59.941145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 00:46:59.941293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 00:47:00.905621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:47:00.905761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:47:00.906996       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 00:47:00.907089       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:47:00.942823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:47:00.942862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:47:00.951864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:47:00.952007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:47:01.030920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:47:01.031101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:47:01.032673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:47:01.032706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0420 00:47:01.039542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:47:01.039661       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0420 00:47:03.025762       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 00:53:35 addons-747503 kubelet[1518]: I0420 00:53:35.865748    1518 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec712066-7b44-45dc-a961-0f7688a75714-kube-api-access-zcdxc" (OuterVolumeSpecName: "kube-api-access-zcdxc") pod "ec712066-7b44-45dc-a961-0f7688a75714" (UID: "ec712066-7b44-45dc-a961-0f7688a75714"). InnerVolumeSpecName "kube-api-access-zcdxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 20 00:53:35 addons-747503 kubelet[1518]: I0420 00:53:35.963079    1518 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zcdxc\" (UniqueName: \"kubernetes.io/projected/ec712066-7b44-45dc-a961-0f7688a75714-kube-api-access-zcdxc\") on node \"addons-747503\" DevicePath \"\""
	Apr 20 00:53:36 addons-747503 kubelet[1518]: I0420 00:53:36.227027    1518 scope.go:117] "RemoveContainer" containerID="70d7dbd6eb021afc55b2fda57c7aa746c8427d85be31cc4a1d42b29801b97d47"
	Apr 20 00:53:36 addons-747503 kubelet[1518]: E0420 00:53:36.227489    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-j48lz_gadget(1c6fda8f-82c7-43ad-8c7d-11de076291e3)\"" pod="gadget/gadget-j48lz" podUID="1c6fda8f-82c7-43ad-8c7d-11de076291e3"
	Apr 20 00:53:36 addons-747503 kubelet[1518]: I0420 00:53:36.228726    1518 scope.go:117] "RemoveContainer" containerID="3bb984c846326537d28760d5ead2df08b0056f36b4ff3617b450a2be724ef733"
	Apr 20 00:53:36 addons-747503 kubelet[1518]: I0420 00:53:36.359776    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec712066-7b44-45dc-a961-0f7688a75714" path="/var/lib/kubelet/pods/ec712066-7b44-45dc-a961-0f7688a75714/volumes"
	Apr 20 00:53:37 addons-747503 kubelet[1518]: I0420 00:53:37.231453    1518 scope.go:117] "RemoveContainer" containerID="70d7dbd6eb021afc55b2fda57c7aa746c8427d85be31cc4a1d42b29801b97d47"
	Apr 20 00:53:37 addons-747503 kubelet[1518]: E0420 00:53:37.231915    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-j48lz_gadget(1c6fda8f-82c7-43ad-8c7d-11de076291e3)\"" pod="gadget/gadget-j48lz" podUID="1c6fda8f-82c7-43ad-8c7d-11de076291e3"
	Apr 20 00:53:38 addons-747503 kubelet[1518]: I0420 00:53:38.359610    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8afdedc-2b46-4711-b613-5ae6a9039bf5" path="/var/lib/kubelet/pods/a8afdedc-2b46-4711-b613-5ae6a9039bf5/volumes"
	Apr 20 00:53:38 addons-747503 kubelet[1518]: I0420 00:53:38.359986    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b505563e-8c86-4278-b14f-806a393d3845" path="/var/lib/kubelet/pods/b505563e-8c86-4278-b14f-806a393d3845/volumes"
	Apr 20 00:53:39 addons-747503 kubelet[1518]: I0420 00:53:39.357679    1518 scope.go:117] "RemoveContainer" containerID="ecb26851b5633392f87949fd0f71fe5ae22513f143549b1d1f26877dd8cde6a1"
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.239393    1518 scope.go:117] "RemoveContainer" containerID="350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f"
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.242947    1518 scope.go:117] "RemoveContainer" containerID="08c1bcfd72f398136b60482fe95a4f6e470e05801209f5833146c8dc87586ba8"
	Apr 20 00:53:40 addons-747503 kubelet[1518]: E0420 00:53:40.243219    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-j7hjs_default(c7fb6036-110e-4661-aca3-2f00006c27de)\"" pod="default/hello-world-app-86c47465fc-j7hjs" podUID="c7fb6036-110e-4661-aca3-2f00006c27de"
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.265113    1518 scope.go:117] "RemoveContainer" containerID="350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f"
	Apr 20 00:53:40 addons-747503 kubelet[1518]: E0420 00:53:40.265666    1518 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f\": container with ID starting with 350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f not found: ID does not exist" containerID="350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f"
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.265712    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f"} err="failed to get container status \"350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f\": rpc error: code = NotFound desc = could not find container \"350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f\": container with ID starting with 350ed3938381cd9760ab19844905b96dbbfa9ade65937e486d218e771049ec1f not found: ID does not exist"
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.265737    1518 scope.go:117] "RemoveContainer" containerID="ecb26851b5633392f87949fd0f71fe5ae22513f143549b1d1f26877dd8cde6a1"
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.292777    1518 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grx8s\" (UniqueName: \"kubernetes.io/projected/d5ddb622-d22b-4f2b-b279-493146cf0e6a-kube-api-access-grx8s\") pod \"d5ddb622-d22b-4f2b-b279-493146cf0e6a\" (UID: \"d5ddb622-d22b-4f2b-b279-493146cf0e6a\") "
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.292844    1518 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5ddb622-d22b-4f2b-b279-493146cf0e6a-webhook-cert\") pod \"d5ddb622-d22b-4f2b-b279-493146cf0e6a\" (UID: \"d5ddb622-d22b-4f2b-b279-493146cf0e6a\") "
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.295099    1518 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ddb622-d22b-4f2b-b279-493146cf0e6a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d5ddb622-d22b-4f2b-b279-493146cf0e6a" (UID: "d5ddb622-d22b-4f2b-b279-493146cf0e6a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.295255    1518 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ddb622-d22b-4f2b-b279-493146cf0e6a-kube-api-access-grx8s" (OuterVolumeSpecName: "kube-api-access-grx8s") pod "d5ddb622-d22b-4f2b-b279-493146cf0e6a" (UID: "d5ddb622-d22b-4f2b-b279-493146cf0e6a"). InnerVolumeSpecName "kube-api-access-grx8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.358955    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ddb622-d22b-4f2b-b279-493146cf0e6a" path="/var/lib/kubelet/pods/d5ddb622-d22b-4f2b-b279-493146cf0e6a/volumes"
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.393363    1518 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5ddb622-d22b-4f2b-b279-493146cf0e6a-webhook-cert\") on node \"addons-747503\" DevicePath \"\""
	Apr 20 00:53:40 addons-747503 kubelet[1518]: I0420 00:53:40.393413    1518 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-grx8s\" (UniqueName: \"kubernetes.io/projected/d5ddb622-d22b-4f2b-b279-493146cf0e6a-kube-api-access-grx8s\") on node \"addons-747503\" DevicePath \"\""
	
	
	==> storage-provisioner [22c56a3e8a0fed567d434d23c22e5fb9e361b66b1c454f968e6ca7a6a7da876d] <==
	I0420 00:47:51.832097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 00:47:51.868002       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 00:47:51.868049       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 00:47:51.883550       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 00:47:51.884602       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69e938c5-ddcb-47d2-89e5-2e78c1a90077", APIVersion:"v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-747503_fa266937-e3d1-47aa-bd72-27b9ca80792a became leader
	I0420 00:47:51.887962       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-747503_fa266937-e3d1-47aa-bd72-27b9ca80792a!
	I0420 00:47:51.988837       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-747503_fa266937-e3d1-47aa-bd72-27b9ca80792a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-747503 -n addons-747503
helpers_test.go:261: (dbg) Run:  kubectl --context addons-747503 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.64s)
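
The etcd log in the post-mortem above records several "apply request took too long" warnings, with read latencies up to roughly 700ms against the 100ms expected duration. When triaging these, it helps to pull just the slow entries out of the JSON log stream. Below is a minimal Go sketch of such a filter, assuming the log lines are piped in on stdin; the JSON field names are taken from the etcd entries shown above, everything else is illustrative:

	// slowetcd.go: print the duration and request key of etcd
	// "apply request took too long" warnings read from stdin.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	// etcdLine holds just the fields we need from an etcd JSON log entry.
	type etcdLine struct {
		Level   string `json:"level"`
		Msg     string `json:"msg"`
		Took    string `json:"took"`
		Request string `json:"request"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // trace lines can be long
		for sc.Scan() {
			var e etcdLine
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // not a JSON log line; skip
			}
			if e.Level == "warn" && strings.Contains(e.Msg, "took too long") {
				fmt.Printf("%-14s %s\n", e.Took, e.Request)
			}
		}
	}

Fed the etcd section above, this would surface the 490ms and 508ms range reads on /registry keys, which is typically the first hint of disk or CPU contention on the test node.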

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (342.2s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.397573ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-jmtz4" [582654f0-7046-465f-b015-d889d5397c3c] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004796639s
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (94.835893ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 3m27.678096362s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (83.997343ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 3m30.819374451s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (93.750437ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 3m34.683677134s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (109.205245ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 3m42.853066506s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (81.011017ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 3m57.538764023s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (87.951852ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 4m19.392589014s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (152.492673ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 4m44.500961562s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (96.032494ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 5m23.69625353s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (88.290106ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 6m25.286065416s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (84.997527ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 7m17.132096717s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (85.636737ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 8m19.392227266s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-747503 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-747503 top pods -n kube-system: exit status 1 (90.013048ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-pj8wd, age: 9m1.561700039s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
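The run of identical `kubectl top pods` attempts above is the harness polling until metrics-server serves its first scrape; each attempt fails with "Metrics not available" and exit status 1 until then. A minimal Go sketch of the same retry pattern, assuming kubectl is on PATH; the context name matches this run, but the 15s interval and 5m timeout are illustrative, not the harness's exact values:

	// polltop.go: retry `kubectl top pods` until metrics appear or we time out.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func pollTopPods(context, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"top", "pods", "-n", namespace).CombinedOutput()
			if err == nil {
				fmt.Print(string(out)) // metrics-server answered; done
				return nil
			}
			time.Sleep(15 * time.Second) // still "Metrics not available"
		}
		return fmt.Errorf("no pod metrics within %s", timeout)
	}

	func main() {
		if err := pollTopPods("addons-747503", "kube-system", 5*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

In this run, retries spanning several minutes produced no data, which together with the v1beta1.metrics.k8s.io connection-refused errors in the kube-apiserver log above suggests metrics-server never became reachable, rather than the poll window being too short.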
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-747503
helpers_test.go:235: (dbg) docker inspect addons-747503:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56",
	        "Created": "2024-04-20T00:46:38.106832296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1644719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-20T00:46:38.423221308Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3b2d88ca3ca9b0dbaf60124ea2550b937bd64c7063d7cb640718ddb37cba13b1",
	        "ResolvConfPath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/hostname",
	        "HostsPath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/hosts",
	        "LogPath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56-json.log",
	        "Name": "/addons-747503",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-747503:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-747503",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542-init/diff:/var/lib/docker/overlay2/e0471a8635b1d2c4e15ee92afa46c7d34f76188a5b6aa3cb3689b7cec908b9a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-747503",
	                "Source": "/var/lib/docker/volumes/addons-747503/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-747503",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-747503",
	                "name.minikube.sigs.k8s.io": "addons-747503",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2bf060fd849aa8a792c66482994fdba957bcf5fad9bd2decda24bd7d8500a7b5",
	            "SandboxKey": "/var/run/docker/netns/2bf060fd849a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34675"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34674"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34671"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34673"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34672"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-747503": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "64e1715d5e750e9daed359ac38e3073a5c93c82f8a5daf2e135f2d0b5be8da62",
	                    "EndpointID": "31ed3dc6d507db832465fc3d5d178d5ab6552b0ea16ea63ec1d876b06129484e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-747503",
	                        "038fb1234c5e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
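The NetworkSettings.Ports map in the inspect dump above is the structure the harness reads (later in this log, via `docker container inspect -f` with a Go template) to learn which 127.0.0.1 port fronts the container's 22/tcp. A minimal Go sketch of the same lookup, assuming the JSON from `docker container inspect addons-747503` on stdin; the program and its type names are illustrative, not minikube code:

    // portfor.go: print the host address(es) bound to the container's 22/tcp.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	var out []inspect // `docker inspect` always emits a JSON array
    	if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil {
    		fmt.Fprintln(os.Stderr, "decode:", err)
    		os.Exit(1)
    	}
    	if len(out) == 0 {
    		fmt.Fprintln(os.Stderr, "no container in inspect output")
    		os.Exit(1)
    	}
    	for _, b := range out[0].NetworkSettings.Ports["22/tcp"] {
    		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
    	}
    }

Run against the dump above, it would print 127.0.0.1:34675, the same port the SSH provisioning steps later in this log dial.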
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-747503 -n addons-747503
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-747503 logs -n 25: (1.562347669s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-161385                                                                     | download-only-161385   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-784633                                                                     | download-only-784633   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-161385                                                                     | download-only-161385   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| start   | --download-only -p                                                                          | download-docker-407942 | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | download-docker-407942                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-407942                                                                   | download-docker-407942 | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-562090   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | binary-mirror-562090                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39787                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-562090                                                                     | binary-mirror-562090   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| addons  | enable dashboard -p                                                                         | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-747503 --wait=true                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-747503 ip                                                                            | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:49 UTC | 20 Apr 24 00:49 UTC |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:49 UTC | 20 Apr 24 00:49 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | -p addons-747503                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-747503 ssh cat                                                                       | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | /opt/local-path-provisioner/pvc-b29b3cd7-c850-4a4e-b0ba-8a8cc403a41d_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-747503 addons                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-747503 addons                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC |                     |
	|         | -p addons-747503                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-747503 ssh curl -s                                                                   | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-747503 ip                                                                            | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:53 UTC | 20 Apr 24 00:53 UTC |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:53 UTC | 20 Apr 24 00:53 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:53 UTC | 20 Apr 24 00:53 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:53 UTC | 20 Apr 24 00:53 UTC |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| addons  | addons-747503 addons                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:56 UTC | 20 Apr 24 00:56 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:46:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:46:14.607015 1644261 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:46:14.607178 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:46:14.607208 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:46:14.607226 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:46:14.607498 1644261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 00:46:14.607984 1644261 out.go:298] Setting JSON to false
	I0420 00:46:14.608870 1644261 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26921,"bootTime":1713547053,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0420 00:46:14.608940 1644261 start.go:139] virtualization:  
	I0420 00:46:14.612689 1644261 out.go:177] * [addons-747503] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0420 00:46:14.614357 1644261 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:46:14.616082 1644261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:46:14.614429 1644261 notify.go:220] Checking for updates...
	I0420 00:46:14.619849 1644261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 00:46:14.621777 1644261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	I0420 00:46:14.623523 1644261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0420 00:46:14.625229 1644261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:46:14.627320 1644261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:46:14.645723 1644261 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0420 00:46:14.645835 1644261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:46:14.712118 1644261 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-20 00:46:14.700723825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:46:14.712238 1644261 docker.go:295] overlay module found
	I0420 00:46:14.714333 1644261 out.go:177] * Using the docker driver based on user configuration
	I0420 00:46:14.715905 1644261 start.go:297] selected driver: docker
	I0420 00:46:14.715921 1644261 start.go:901] validating driver "docker" against <nil>
	I0420 00:46:14.715934 1644261 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:46:14.716574 1644261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:46:14.765511 1644261 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-20 00:46:14.755476473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:46:14.765687 1644261 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0420 00:46:14.765914 1644261 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:46:14.767783 1644261 out.go:177] * Using Docker driver with root privileges
	I0420 00:46:14.769372 1644261 cni.go:84] Creating CNI manager for ""
	I0420 00:46:14.769396 1644261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:46:14.769406 1644261 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0420 00:46:14.769486 1644261 start.go:340] cluster config:
	{Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:46:14.771617 1644261 out.go:177] * Starting "addons-747503" primary control-plane node in "addons-747503" cluster
	I0420 00:46:14.773185 1644261 cache.go:121] Beginning downloading kic base image for docker with crio
	I0420 00:46:14.774855 1644261 out.go:177] * Pulling base image v0.0.43 ...
	I0420 00:46:14.776595 1644261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:14.776634 1644261 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon
	I0420 00:46:14.776648 1644261 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0420 00:46:14.776672 1644261 cache.go:56] Caching tarball of preloaded images
	I0420 00:46:14.776753 1644261 preload.go:173] Found /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0420 00:46:14.776764 1644261 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:46:14.777129 1644261 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/config.json ...
	I0420 00:46:14.777263 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/config.json: {Name:mkc5932488b9adc511b83497f974c2edc34e9770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:14.789608 1644261 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 to local cache
	I0420 00:46:14.789711 1644261 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local cache directory
	I0420 00:46:14.789728 1644261 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local cache directory, skipping pull
	I0420 00:46:14.789733 1644261 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 exists in cache, skipping pull
	I0420 00:46:14.789741 1644261 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 as a tarball
	I0420 00:46:14.789746 1644261 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 from local cache
	I0420 00:46:31.319259 1644261 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 from cached tarball
	I0420 00:46:31.319302 1644261 cache.go:194] Successfully downloaded all kic artifacts
	I0420 00:46:31.319332 1644261 start.go:360] acquireMachinesLock for addons-747503: {Name:mk90f80baada2f8c104726bc92d1956d63d494dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:46:31.319827 1644261 start.go:364] duration metric: took 471.731µs to acquireMachinesLock for "addons-747503"
	I0420 00:46:31.319867 1644261 start.go:93] Provisioning new machine with config: &{Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:46:31.319953 1644261 start.go:125] createHost starting for "" (driver="docker")
	I0420 00:46:31.322194 1644261 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0420 00:46:31.322447 1644261 start.go:159] libmachine.API.Create for "addons-747503" (driver="docker")
	I0420 00:46:31.322484 1644261 client.go:168] LocalClient.Create starting
	I0420 00:46:31.322598 1644261 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem
	I0420 00:46:31.615216 1644261 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem
	I0420 00:46:31.818172 1644261 cli_runner.go:164] Run: docker network inspect addons-747503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0420 00:46:31.832341 1644261 cli_runner.go:211] docker network inspect addons-747503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0420 00:46:31.832434 1644261 network_create.go:281] running [docker network inspect addons-747503] to gather additional debugging logs...
	I0420 00:46:31.832456 1644261 cli_runner.go:164] Run: docker network inspect addons-747503
	W0420 00:46:31.845135 1644261 cli_runner.go:211] docker network inspect addons-747503 returned with exit code 1
	I0420 00:46:31.845171 1644261 network_create.go:284] error running [docker network inspect addons-747503]: docker network inspect addons-747503: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-747503 not found
	I0420 00:46:31.845184 1644261 network_create.go:286] output of [docker network inspect addons-747503]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-747503 not found
	
	** /stderr **
	I0420 00:46:31.845292 1644261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0420 00:46:31.858385 1644261 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024d75e0}
	I0420 00:46:31.858427 1644261 network_create.go:124] attempt to create docker network addons-747503 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0420 00:46:31.858487 1644261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-747503 addons-747503
	I0420 00:46:31.918669 1644261 network_create.go:108] docker network addons-747503 192.168.49.0/24 created
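The network.go line above derives the usable range of the chosen 192.168.49.0/24 before creating the network: gateway at .1, first client IP at .2, last client IP at .254, broadcast at .255. A short sketch of that arithmetic with Go's net/netip (illustrative; the printed field names follow the log, not minikube's internals):

    // subnet.go: reproduce the Gateway/ClientMin/ClientMax/Broadcast fields
    // logged for the free private subnet 192.168.49.0/24.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	p := netip.MustParsePrefix("192.168.49.0/24")
    	base := p.Addr()   // 192.168.49.0, the network address
    	gw := base.Next()  // .1  -> Gateway
    	first := gw.Next() // .2  -> ClientMin, the first node IP
    	last := base
    	for i := 0; i < 255; i++ { // walk to .255, the last address in the /24
    		last = last.Next()
    	}
    	fmt.Println("Gateway:", gw, "ClientMin:", first,
    		"ClientMax:", last.Prev(), "Broadcast:", last)
    }

This is why the first container in the cluster gets the static IP 192.168.49.2, as the kic.go line that follows records.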
	I0420 00:46:31.918704 1644261 kic.go:121] calculated static IP "192.168.49.2" for the "addons-747503" container
	I0420 00:46:31.918779 1644261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0420 00:46:31.932121 1644261 cli_runner.go:164] Run: docker volume create addons-747503 --label name.minikube.sigs.k8s.io=addons-747503 --label created_by.minikube.sigs.k8s.io=true
	I0420 00:46:31.946137 1644261 oci.go:103] Successfully created a docker volume addons-747503
	I0420 00:46:31.946230 1644261 cli_runner.go:164] Run: docker run --rm --name addons-747503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747503 --entrypoint /usr/bin/test -v addons-747503:/var gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -d /var/lib
	I0420 00:46:33.904376 1644261 cli_runner.go:217] Completed: docker run --rm --name addons-747503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747503 --entrypoint /usr/bin/test -v addons-747503:/var gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -d /var/lib: (1.958105111s)
	I0420 00:46:33.904409 1644261 oci.go:107] Successfully prepared a docker volume addons-747503
	I0420 00:46:33.904447 1644261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:33.904466 1644261 kic.go:194] Starting extracting preloaded images to volume ...
	I0420 00:46:33.904548 1644261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747503:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -I lz4 -xf /preloaded.tar -C /extractDir
	I0420 00:46:38.033459 1644261 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747503:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -I lz4 -xf /preloaded.tar -C /extractDir: (4.128855513s)
	I0420 00:46:38.033498 1644261 kic.go:203] duration metric: took 4.129027815s to extract preloaded images to volume ...
	W0420 00:46:38.033666 1644261 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0420 00:46:38.033783 1644261 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0420 00:46:38.092961 1644261 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-747503 --name addons-747503 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747503 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-747503 --network addons-747503 --ip 192.168.49.2 --volume addons-747503:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737
	I0420 00:46:38.431321 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Running}}
	I0420 00:46:38.449111 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:46:38.473100 1644261 cli_runner.go:164] Run: docker exec addons-747503 stat /var/lib/dpkg/alternatives/iptables
	I0420 00:46:38.539136 1644261 oci.go:144] the created container "addons-747503" has a running status.
	I0420 00:46:38.539177 1644261 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa...
	I0420 00:46:38.988697 1644261 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0420 00:46:39.013673 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:46:39.036196 1644261 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0420 00:46:39.036217 1644261 kic_runner.go:114] Args: [docker exec --privileged addons-747503 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0420 00:46:39.118596 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:46:39.142860 1644261 machine.go:94] provisionDockerMachine start ...
	I0420 00:46:39.142976 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:39.167812 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:39.168086 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:39.168096 1644261 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 00:46:39.349580 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747503
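The "native" SSH client above dials the published 22/tcp port (127.0.0.1:34675 here) with the machine key generated a moment earlier and runs plain commands such as `hostname`. A self-contained sketch of that round trip using golang.org/x/crypto/ssh; the key path is shortened to $HOME for illustration, and minikube's real client wiring differs:

    // sshcmd.go: run `hostname` over SSH against the published port,
    // authenticating with the generated machine key.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/addons-747503/id_rsa"))
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test rig only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:34675", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out)) // "addons-747503", matching the log output above
    }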
	
	I0420 00:46:39.349601 1644261 ubuntu.go:169] provisioning hostname "addons-747503"
	I0420 00:46:39.349678 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:39.377796 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:39.378035 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:39.378046 1644261 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-747503 && echo "addons-747503" | sudo tee /etc/hostname
	I0420 00:46:39.558224 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747503
	
	I0420 00:46:39.558419 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:39.575363 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:39.575600 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:39.575617 1644261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-747503' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-747503/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-747503' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:46:39.717750 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:46:39.717780 1644261 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18703-1638187/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-1638187/.minikube}
	I0420 00:46:39.717798 1644261 ubuntu.go:177] setting up certificates
	I0420 00:46:39.717807 1644261 provision.go:84] configureAuth start
	I0420 00:46:39.717871 1644261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747503
	I0420 00:46:39.734066 1644261 provision.go:143] copyHostCerts
	I0420 00:46:39.734147 1644261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem (1082 bytes)
	I0420 00:46:39.734277 1644261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem (1123 bytes)
	I0420 00:46:39.734339 1644261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem (1675 bytes)
	I0420 00:46:39.734390 1644261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem org=jenkins.addons-747503 san=[127.0.0.1 192.168.49.2 addons-747503 localhost minikube]
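The provision.go line above signs a server certificate with the minikube CA and the SAN list shown (127.0.0.1, 192.168.49.2, addons-747503, localhost, minikube). A compact sketch of issuing such a SAN-bearing cert with crypto/x509; the in-memory CA key below is a stand-in for the ca.pem/ca-key.pem pair that minikube loads from disk:

    // servercert.go: issue a server certificate carrying the SANs from the log.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // stand-in for ca-key.pem
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-747503"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the san=[...] list in the log line above:
    		DNSNames:    []string{"addons-747503", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

The copyRemoteCerts step that follows then scp's the resulting server.pem, server-key.pem, and ca.pem into /etc/docker on the node.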
	I0420 00:46:40.231219 1644261 provision.go:177] copyRemoteCerts
	I0420 00:46:40.231290 1644261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:46:40.231331 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.247276 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.346662 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 00:46:40.371651 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0420 00:46:40.396149 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0420 00:46:40.421133 1644261 provision.go:87] duration metric: took 703.312596ms to configureAuth
	I0420 00:46:40.421162 1644261 ubuntu.go:193] setting minikube options for container-runtime
	I0420 00:46:40.421357 1644261 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:46:40.421463 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.436686 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:40.436931 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:40.436947 1644261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:46:40.681193 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:46:40.681219 1644261 machine.go:97] duration metric: took 1.538331373s to provisionDockerMachine
	I0420 00:46:40.681230 1644261 client.go:171] duration metric: took 9.358739082s to LocalClient.Create
	I0420 00:46:40.681274 1644261 start.go:167] duration metric: took 9.358813131s to libmachine.API.Create "addons-747503"
	I0420 00:46:40.681289 1644261 start.go:293] postStartSetup for "addons-747503" (driver="docker")
	I0420 00:46:40.681301 1644261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:46:40.681386 1644261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:46:40.681463 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.698546 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.802764 1644261 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:46:40.805936 1644261 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0420 00:46:40.805975 1644261 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0420 00:46:40.806008 1644261 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0420 00:46:40.806022 1644261 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0420 00:46:40.806034 1644261 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/addons for local assets ...
	I0420 00:46:40.806115 1644261 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/files for local assets ...
	I0420 00:46:40.806144 1644261 start.go:296] duration metric: took 124.848597ms for postStartSetup
	I0420 00:46:40.806464 1644261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747503
	I0420 00:46:40.821587 1644261 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/config.json ...
	I0420 00:46:40.821882 1644261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:46:40.821936 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.835949 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.934325 1644261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0420 00:46:40.938838 1644261 start.go:128] duration metric: took 9.618867781s to createHost
	I0420 00:46:40.938860 1644261 start.go:83] releasing machines lock for "addons-747503", held for 9.61901377s
	I0420 00:46:40.938948 1644261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747503
	I0420 00:46:40.954767 1644261 ssh_runner.go:195] Run: cat /version.json
	I0420 00:46:40.954809 1644261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:46:40.954838 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.954856 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.973750 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.987873 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:41.073383 1644261 ssh_runner.go:195] Run: systemctl --version
	I0420 00:46:41.192077 1644261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:46:41.344667 1644261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0420 00:46:41.349255 1644261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:46:41.370360 1644261 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0420 00:46:41.370464 1644261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:46:41.403068 1644261 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
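The two find/mv passes above park any pre-baked loopback and bridge/podman CNI configs under a .mk_disabled suffix, so that only the CNI minikube installs later (kindnet, per the recommendation further down) is active. Re-enabling one is just the reverse rename; an illustrative sketch using a file named in the log:

    # Illustrative only: restore a config that minikube parked
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
            /etc/cni/net.d/87-podman-bridge.conflist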
	I0420 00:46:41.403146 1644261 start.go:494] detecting cgroup driver to use...
	I0420 00:46:41.403194 1644261 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0420 00:46:41.403271 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:46:41.419319 1644261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:46:41.431512 1644261 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:46:41.431608 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:46:41.446179 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:46:41.465996 1644261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:46:41.554380 1644261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:46:41.655130 1644261 docker.go:233] disabling docker service ...
	I0420 00:46:41.655197 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:46:41.675820 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:46:41.688324 1644261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:46:41.772551 1644261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:46:41.869236 1644261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:46:41.880923 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:46:41.897306 1644261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:46:41.897393 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.908466 1644261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:46:41.908556 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.919831 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.930232 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.940033 1644261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:46:41.949454 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.959319 1644261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.974839 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.984469 1644261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:46:41.993979 1644261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:46:42.008022 1644261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:46:42.111879 1644261 ssh_runner.go:195] Run: sudo systemctl restart crio
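Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on the pause image, the cgroupfs cgroup manager, the "pod" conmon cgroup, and the unprivileged-port sysctl before crio is restarted. A minimal way to confirm the result, assuming the node container is still up:

    # Values below mirror the sed commands in this log
    docker exec addons-747503 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",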
	I0420 00:46:42.238392 1644261 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:46:42.238485 1644261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:46:42.242714 1644261 start.go:562] Will wait 60s for crictl version
	I0420 00:46:42.242782 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:46:42.246739 1644261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:46:42.289378 1644261 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0420 00:46:42.289488 1644261 ssh_runner.go:195] Run: crio --version
	I0420 00:46:42.333568 1644261 ssh_runner.go:195] Run: crio --version
	I0420 00:46:42.377897 1644261 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0420 00:46:42.379595 1644261 cli_runner.go:164] Run: docker network inspect addons-747503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0420 00:46:42.392523 1644261 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0420 00:46:42.396287 1644261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
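The hosts update above is written as grep -v + append + cp (rather than sed -i) because /etc/hosts inside a Docker container is bind-mounted: it can be overwritten in place, but not replaced by the rename that sed -i performs. The same idempotent pattern, generalized into a hypothetical helper:

    # add_host <ip> <name> -- hypothetical helper, same pattern as the log line above
    add_host() {
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
    }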
	I0420 00:46:42.406719 1644261 kubeadm.go:877] updating cluster {Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 00:46:42.406844 1644261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:42.406909 1644261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:46:42.492542 1644261 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:46:42.492568 1644261 crio.go:433] Images already preloaded, skipping extraction
	I0420 00:46:42.492648 1644261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:46:42.532591 1644261 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:46:42.532618 1644261 cache_images.go:84] Images are preloaded, skipping loading
	I0420 00:46:42.532628 1644261 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 crio true true} ...
	I0420 00:46:42.532741 1644261 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-747503 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
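The [Service] drop-in above replaces kubelet's ExecStart with minikube's flags (bootstrap kubeconfig, --cgroups-per-qos=false, node IP pinned to 192.168.49.2); it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A sketch of confirming that systemd merged it, assuming the node container is reachable:

    # Show the effective kubelet unit including the 10-kubeadm.conf drop-in
    docker exec addons-747503 systemctl cat kubelet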
	I0420 00:46:42.532824 1644261 ssh_runner.go:195] Run: crio config
	I0420 00:46:42.580609 1644261 cni.go:84] Creating CNI manager for ""
	I0420 00:46:42.580639 1644261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:46:42.580660 1644261 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 00:46:42.580718 1644261 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-747503 NodeName:addons-747503 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 00:46:42.580886 1644261 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-747503"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
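Before this three-document config (InitConfiguration, ClusterConfiguration, and the Kubelet/KubeProxy configurations) is handed to kubeadm init below, it can be sanity-checked offline. A sketch, with the caveat that `kubeadm config validate` ships with recent kubeadm releases (including v1.30) but is not part of minikube's own flow in this log:

    # Validate the generated config without touching the cluster;
    # kubeadm.yaml is copied from the .new staging file further down
    docker exec addons-747503 sudo /var/lib/minikube/binaries/v1.30.0/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml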
	
	I0420 00:46:42.580966 1644261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:46:42.590117 1644261 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 00:46:42.590190 1644261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 00:46:42.599044 1644261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0420 00:46:42.617636 1644261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:46:42.635779 1644261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0420 00:46:42.653757 1644261 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0420 00:46:42.657403 1644261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:46:42.668479 1644261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:46:42.748825 1644261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:46:42.762791 1644261 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503 for IP: 192.168.49.2
	I0420 00:46:42.762861 1644261 certs.go:194] generating shared ca certs ...
	I0420 00:46:42.762893 1644261 certs.go:226] acquiring lock for ca certs: {Name:mkf02d2bd3e0f29e12b7cec7c5b9a48566830288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:42.763075 1644261 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key
	I0420 00:46:42.952911 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt ...
	I0420 00:46:42.952946 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt: {Name:mk49370c70b4ffc1cbcd1227f487de3de2af3ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:42.953182 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key ...
	I0420 00:46:42.953200 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key: {Name:mk2877a201a5ba28e426f127f32ae06fa0033f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:42.953299 1644261 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key
	I0420 00:46:43.525747 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt ...
	I0420 00:46:43.525778 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt: {Name:mk695cd51a6cd9c3c06377fb3cd1872da426efc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.527292 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key ...
	I0420 00:46:43.527309 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key: {Name:mkef065e7c04a8c6100720cceafeab1ff9cb96b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.527942 1644261 certs.go:256] generating profile certs ...
	I0420 00:46:43.528022 1644261 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.key
	I0420 00:46:43.528041 1644261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt with IP's: []
	I0420 00:46:43.960821 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt ...
	I0420 00:46:43.960852 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: {Name:mk84a033ba366df9ffa0dfef7e831bb3e5c0f737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.961043 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.key ...
	I0420 00:46:43.961056 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.key: {Name:mk83bfd7e187e91bdb04631dbc1011de4d92fc28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.961606 1644261 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09
	I0420 00:46:43.961631 1644261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0420 00:46:44.377939 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09 ...
	I0420 00:46:44.377977 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09: {Name:mk0a88b731f275f786bbac6d601f7f9fda080c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:44.378572 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09 ...
	I0420 00:46:44.378591 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09: {Name:mkd4e59169d95ea0e222dd2e9bcaa9e7684c6506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:44.379246 1644261 certs.go:381] copying /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09 -> /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt
	I0420 00:46:44.379343 1644261 certs.go:385] copying /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09 -> /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key
	I0420 00:46:44.379402 1644261 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key
	I0420 00:46:44.379425 1644261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt with IP's: []
	I0420 00:46:45.155458 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt ...
	I0420 00:46:45.155496 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt: {Name:mk297ed885f196ef52980a6bcd4c4dd306202aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:45.155722 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key ...
	I0420 00:46:45.155739 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key: {Name:mk1a0c4c69f4e1c4e307aafc0f32c462980fe679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:45.155970 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem (1679 bytes)
	I0420 00:46:45.156033 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem (1082 bytes)
	I0420 00:46:45.156076 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:46:45.156120 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem (1675 bytes)
	I0420 00:46:45.156827 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:46:45.185776 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:46:45.215921 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:46:45.246659 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0420 00:46:45.276336 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0420 00:46:45.302931 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:46:45.330184 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:46:45.355925 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 00:46:45.380042 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:46:45.404816 1644261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 00:46:45.422615 1644261 ssh_runner.go:195] Run: openssl version
	I0420 00:46:45.427939 1644261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:46:45.437580 1644261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:46:45.441275 1644261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:46 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:46:45.441378 1644261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:46:45.448324 1644261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
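The openssl steps above register the freshly generated minikubeCA with OpenSSL's hash-based trust lookup: the certificate's subject hash (b5213941 here) becomes the symlink name under /etc/ssl/certs, which verifiers resolve at run time. The hash can be reproduced by hand:

    # Subject hash drives the /etc/ssl/certs/<hash>.0 symlink created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941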
	I0420 00:46:45.457860 1644261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:46:45.461194 1644261 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 00:46:45.461309 1644261 kubeadm.go:391] StartCluster: {Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:46:45.461403 1644261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:46:45.461467 1644261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:46:45.503476 1644261 cri.go:89] found id: ""
	I0420 00:46:45.503547 1644261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 00:46:45.512391 1644261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 00:46:45.521198 1644261 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0420 00:46:45.521290 1644261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 00:46:45.530277 1644261 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 00:46:45.530297 1644261 kubeadm.go:156] found existing configuration files:
	
	I0420 00:46:45.530357 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 00:46:45.539187 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 00:46:45.539295 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 00:46:45.547666 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 00:46:45.556291 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 00:46:45.556360 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 00:46:45.564678 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 00:46:45.573450 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 00:46:45.573517 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 00:46:45.582508 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 00:46:45.591500 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 00:46:45.591577 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 00:46:45.600866 1644261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0420 00:46:45.645453 1644261 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 00:46:45.645777 1644261 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 00:46:45.683542 1644261 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0420 00:46:45.683657 1644261 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0420 00:46:45.683722 1644261 kubeadm.go:309] OS: Linux
	I0420 00:46:45.683789 1644261 kubeadm.go:309] CGROUPS_CPU: enabled
	I0420 00:46:45.683865 1644261 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0420 00:46:45.683931 1644261 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0420 00:46:45.684007 1644261 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0420 00:46:45.684075 1644261 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0420 00:46:45.684148 1644261 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0420 00:46:45.684214 1644261 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0420 00:46:45.684312 1644261 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0420 00:46:45.684387 1644261 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0420 00:46:45.759423 1644261 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 00:46:45.759626 1644261 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 00:46:45.759767 1644261 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0420 00:46:46.002291 1644261 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 00:46:46.007243 1644261 out.go:204]   - Generating certificates and keys ...
	I0420 00:46:46.007483 1644261 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 00:46:46.007612 1644261 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 00:46:46.884914 1644261 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 00:46:47.257057 1644261 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 00:46:47.525713 1644261 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 00:46:48.004926 1644261 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 00:46:48.760658 1644261 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 00:46:48.761028 1644261 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-747503 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0420 00:46:49.351744 1644261 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 00:46:49.352075 1644261 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-747503 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0420 00:46:50.201612 1644261 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 00:46:51.561008 1644261 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 00:46:51.893672 1644261 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 00:46:51.893960 1644261 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 00:46:52.391610 1644261 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 00:46:52.832785 1644261 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 00:46:53.450795 1644261 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 00:46:54.163371 1644261 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 00:46:54.525910 1644261 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 00:46:54.526499 1644261 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 00:46:54.530099 1644261 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 00:46:54.533698 1644261 out.go:204]   - Booting up control plane ...
	I0420 00:46:54.533810 1644261 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 00:46:54.533896 1644261 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 00:46:54.534330 1644261 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 00:46:54.544911 1644261 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 00:46:54.545769 1644261 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 00:46:54.545990 1644261 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 00:46:54.640341 1644261 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 00:46:54.640434 1644261 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 00:46:56.142567 1644261 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.502003018s
	I0420 00:46:56.142654 1644261 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 00:47:01.644242 1644261 kubeadm.go:309] [api-check] The API server is healthy after 5.501943699s
	I0420 00:47:01.664659 1644261 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 00:47:01.681300 1644261 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 00:47:01.708476 1644261 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 00:47:01.708705 1644261 kubeadm.go:309] [mark-control-plane] Marking the node addons-747503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 00:47:01.721036 1644261 kubeadm.go:309] [bootstrap-token] Using token: gydxtq.1vtpvmdo173k1bfx
	I0420 00:47:01.723573 1644261 out.go:204]   - Configuring RBAC rules ...
	I0420 00:47:01.723699 1644261 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 00:47:01.728901 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 00:47:01.737904 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 00:47:01.741657 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 00:47:01.745445 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 00:47:01.750064 1644261 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 00:47:02.051404 1644261 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 00:47:02.494567 1644261 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 00:47:03.051115 1644261 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 00:47:03.052415 1644261 kubeadm.go:309] 
	I0420 00:47:03.052490 1644261 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 00:47:03.052501 1644261 kubeadm.go:309] 
	I0420 00:47:03.052583 1644261 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 00:47:03.052596 1644261 kubeadm.go:309] 
	I0420 00:47:03.052621 1644261 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 00:47:03.052682 1644261 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 00:47:03.052735 1644261 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 00:47:03.052744 1644261 kubeadm.go:309] 
	I0420 00:47:03.052796 1644261 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 00:47:03.052805 1644261 kubeadm.go:309] 
	I0420 00:47:03.052851 1644261 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 00:47:03.052861 1644261 kubeadm.go:309] 
	I0420 00:47:03.052912 1644261 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 00:47:03.052987 1644261 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 00:47:03.053062 1644261 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 00:47:03.053074 1644261 kubeadm.go:309] 
	I0420 00:47:03.053155 1644261 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 00:47:03.053232 1644261 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 00:47:03.053241 1644261 kubeadm.go:309] 
	I0420 00:47:03.053322 1644261 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token gydxtq.1vtpvmdo173k1bfx \
	I0420 00:47:03.053425 1644261 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9c904917a7f9caa355a71a4c03ca34b03d28761d5d47f15de292975c6da7288d \
	I0420 00:47:03.053449 1644261 kubeadm.go:309] 	--control-plane 
	I0420 00:47:03.053475 1644261 kubeadm.go:309] 
	I0420 00:47:03.053587 1644261 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 00:47:03.053596 1644261 kubeadm.go:309] 
	I0420 00:47:03.053675 1644261 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token gydxtq.1vtpvmdo173k1bfx \
	I0420 00:47:03.053777 1644261 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9c904917a7f9caa355a71a4c03ca34b03d28761d5d47f15de292975c6da7288d 
	I0420 00:47:03.056801 1644261 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0420 00:47:03.056915 1644261 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
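The --discovery-token-ca-cert-hash value printed in both join commands pins the cluster CA for joining nodes. Should it ever need to be recomputed, the standard kubeadm recipe works against minikube's certificatesDir (/var/lib/minikube/certs, per the config above); prepend sha256: to the output when passing it to the flag:

    # Recompute the CA public-key hash behind --discovery-token-ca-cert-hash
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'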
	I0420 00:47:03.056944 1644261 cni.go:84] Creating CNI manager for ""
	I0420 00:47:03.056957 1644261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:47:03.060887 1644261 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0420 00:47:03.063358 1644261 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0420 00:47:03.067117 1644261 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0420 00:47:03.067135 1644261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0420 00:47:03.086237 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0420 00:47:03.390022 1644261 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 00:47:03.390173 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:03.390313 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-747503 minikube.k8s.io/updated_at=2024_04_20T00_47_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=addons-747503 minikube.k8s.io/primary=true
	I0420 00:47:03.585439 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:03.585498 1644261 ops.go:34] apiserver oom_adj: -16
	I0420 00:47:04.085626 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:04.586529 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:05.086557 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:05.585699 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:06.085654 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:06.585771 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:07.085576 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:07.586155 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:08.086096 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:08.585649 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:09.086504 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:09.585610 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:10.085668 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:10.586519 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:11.086404 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:11.586239 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:12.085669 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:12.586287 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:13.086534 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:13.586536 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:14.085702 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:14.586138 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:15.085794 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:15.586433 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:16.085650 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:16.179804 1644261 kubeadm.go:1107] duration metric: took 12.78970109s to wait for elevateKubeSystemPrivileges
	W0420 00:47:16.179838 1644261 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 00:47:16.179846 1644261 kubeadm.go:393] duration metric: took 30.718541399s to StartCluster
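The run of identical `kubectl get sa default` calls above is minikube's elevateKubeSystemPrivileges readiness poll: the default ServiceAccount only materializes once the controller-manager's serviceaccount controller has synced, so looping on it doubles as a control-plane health check. An equivalent standalone loop, sketched with a hypothetical 2-minute deadline:

    # Poll until the default ServiceAccount exists, i.e. the
    # serviceaccount controller has completed its first sync
    for i in $(seq 1 240); do
      kubectl get sa default >/dev/null 2>&1 && break
      sleep 0.5
    done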
	I0420 00:47:16.179861 1644261 settings.go:142] acquiring lock: {Name:mk38dc124731a3de0f512758a89f5557db305d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:47:16.180388 1644261 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 00:47:16.180815 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/kubeconfig: {Name:mk33979dc7705003abaa608c8031c04a91a05c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:47:16.181428 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0420 00:47:16.181453 1644261 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:47:16.183715 1644261 out.go:177] * Verifying Kubernetes components...
	I0420 00:47:16.181718 1644261 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:16.181730 1644261 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0420 00:47:16.185704 1644261 addons.go:69] Setting yakd=true in profile "addons-747503"
	I0420 00:47:16.185734 1644261 addons.go:234] Setting addon yakd=true in "addons-747503"
	I0420 00:47:16.185768 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.186268 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.186447 1644261 addons.go:69] Setting ingress-dns=true in profile "addons-747503"
	I0420 00:47:16.186469 1644261 addons.go:234] Setting addon ingress-dns=true in "addons-747503"
	I0420 00:47:16.186517 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.186921 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.187248 1644261 addons.go:69] Setting inspektor-gadget=true in profile "addons-747503"
	I0420 00:47:16.187275 1644261 addons.go:234] Setting addon inspektor-gadget=true in "addons-747503"
	I0420 00:47:16.187315 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.187697 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.187894 1644261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:47:16.188126 1644261 addons.go:69] Setting cloud-spanner=true in profile "addons-747503"
	I0420 00:47:16.188155 1644261 addons.go:234] Setting addon cloud-spanner=true in "addons-747503"
	I0420 00:47:16.188175 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.188546 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.191365 1644261 addons.go:69] Setting metrics-server=true in profile "addons-747503"
	I0420 00:47:16.191404 1644261 addons.go:234] Setting addon metrics-server=true in "addons-747503"
	I0420 00:47:16.191442 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.191858 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.195564 1644261 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-747503"
	I0420 00:47:16.195639 1644261 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-747503"
	I0420 00:47:16.195677 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.196140 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.196393 1644261 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-747503"
	I0420 00:47:16.196425 1644261 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-747503"
	I0420 00:47:16.196458 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.196858 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.212712 1644261 addons.go:69] Setting registry=true in profile "addons-747503"
	I0420 00:47:16.212761 1644261 addons.go:234] Setting addon registry=true in "addons-747503"
	I0420 00:47:16.212800 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.213262 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.213595 1644261 addons.go:69] Setting default-storageclass=true in profile "addons-747503"
	I0420 00:47:16.213636 1644261 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-747503"
	I0420 00:47:16.213917 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.229059 1644261 addons.go:69] Setting storage-provisioner=true in profile "addons-747503"
	I0420 00:47:16.229107 1644261 addons.go:234] Setting addon storage-provisioner=true in "addons-747503"
	I0420 00:47:16.229144 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.229674 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.238837 1644261 addons.go:69] Setting gcp-auth=true in profile "addons-747503"
	I0420 00:47:16.238899 1644261 mustload.go:65] Loading cluster: addons-747503
	I0420 00:47:16.239098 1644261 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:16.239349 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.247529 1644261 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-747503"
	I0420 00:47:16.247581 1644261 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-747503"
	I0420 00:47:16.247904 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.260923 1644261 addons.go:69] Setting ingress=true in profile "addons-747503"
	I0420 00:47:16.261022 1644261 addons.go:234] Setting addon ingress=true in "addons-747503"
	I0420 00:47:16.261118 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.261627 1644261 addons.go:69] Setting volumesnapshots=true in profile "addons-747503"
	I0420 00:47:16.261657 1644261 addons.go:234] Setting addon volumesnapshots=true in "addons-747503"
	I0420 00:47:16.261681 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.262076 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.269352 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.380646 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0420 00:47:16.389090 1644261 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0420 00:47:16.389162 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0420 00:47:16.389257 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.396585 1644261 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0420 00:47:16.413756 1644261 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0420 00:47:16.415782 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 00:47:16.415807 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 00:47:16.415883 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.413893 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0420 00:47:16.401466 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.403036 1644261 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-747503"
	I0420 00:47:16.418273 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 00:47:16.418280 1644261 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0420 00:47:16.418293 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0420 00:47:16.420636 1644261 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0420 00:47:16.420642 1644261 out.go:177]   - Using image docker.io/registry:2.8.3
	I0420 00:47:16.420647 1644261 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0420 00:47:16.425004 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0420 00:47:16.425041 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.426776 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.426994 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.441701 1644261 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0420 00:47:16.441728 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0420 00:47:16.441815 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.442302 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0420 00:47:16.445183 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0420 00:47:16.442535 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0420 00:47:16.443701 1644261 addons.go:234] Setting addon default-storageclass=true in "addons-747503"
	I0420 00:47:16.448904 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0420 00:47:16.448990 1644261 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:47:16.449022 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.451018 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.451164 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0420 00:47:16.453010 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0420 00:47:16.451430 1644261 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0420 00:47:16.451490 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0420 00:47:16.451506 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0420 00:47:16.451526 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 00:47:16.455094 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0420 00:47:16.456879 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0420 00:47:16.456900 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0420 00:47:16.456978 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.458542 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0420 00:47:16.458614 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.474496 1644261 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0420 00:47:16.474512 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0420 00:47:16.474577 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.477483 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.461983 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0420 00:47:16.500835 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0420 00:47:16.502674 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0420 00:47:16.504324 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0420 00:47:16.506235 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0420 00:47:16.506258 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0420 00:47:16.506328 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.517701 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.462052 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.601782 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0420 00:47:16.609614 1644261 out.go:177]   - Using image docker.io/busybox:stable
	I0420 00:47:16.605998 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.601684 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.609993 1644261 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0420 00:47:16.611424 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0420 00:47:16.611499 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.620520 1644261 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0420 00:47:16.617584 1644261 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 00:47:16.624839 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 00:47:16.624942 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.638699 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.639612 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.641927 1644261 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0420 00:47:16.641980 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0420 00:47:16.642067 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.668331 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0420 00:47:16.668981 1644261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:47:16.669308 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.673621 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.676969 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.677828 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.699192 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.730795 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.732606 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.742311 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.780716 1644261 node_ready.go:35] waiting up to 6m0s for node "addons-747503" to be "Ready" ...
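node_ready.go polls the node object through client-go until its Ready condition turns True; the "Ready":"False" lines that follow are individual polls. A one-shot kubectl equivalent of the same wait (assuming the addons-747503 context; kubectl wait exits non-zero on timeout):

	kubectl --context addons-747503 wait --for=condition=Ready node/addons-747503 --timeout=6m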
	I0420 00:47:16.921085 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0420 00:47:16.921116 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0420 00:47:16.991932 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0420 00:47:16.996333 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0420 00:47:17.100495 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 00:47:17.100519 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0420 00:47:17.112923 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0420 00:47:17.112950 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0420 00:47:17.120377 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0420 00:47:17.120403 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0420 00:47:17.189815 1644261 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0420 00:47:17.189844 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0420 00:47:17.207623 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0420 00:47:17.210877 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0420 00:47:17.219064 1644261 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0420 00:47:17.219098 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0420 00:47:17.227561 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0420 00:47:17.227588 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0420 00:47:17.268204 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 00:47:17.268232 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 00:47:17.272800 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0420 00:47:17.272833 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0420 00:47:17.275286 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:47:17.303754 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 00:47:17.334811 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0420 00:47:17.334879 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0420 00:47:17.341610 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0420 00:47:17.394607 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0420 00:47:17.394678 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0420 00:47:17.407225 1644261 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0420 00:47:17.407292 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0420 00:47:17.411478 1644261 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0420 00:47:17.411505 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0420 00:47:17.412384 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0420 00:47:17.412410 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0420 00:47:17.459411 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 00:47:17.459482 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 00:47:17.520995 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0420 00:47:17.521029 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0420 00:47:17.562920 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0420 00:47:17.570793 1644261 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0420 00:47:17.570820 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0420 00:47:17.628250 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0420 00:47:17.628287 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0420 00:47:17.635304 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 00:47:17.675653 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0420 00:47:17.675685 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0420 00:47:17.677655 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0420 00:47:17.692234 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0420 00:47:17.692263 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0420 00:47:17.785863 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0420 00:47:17.785891 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0420 00:47:17.790678 1644261 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0420 00:47:17.790712 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0420 00:47:17.836414 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0420 00:47:17.836445 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0420 00:47:17.897210 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0420 00:47:17.956384 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0420 00:47:17.956418 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0420 00:47:17.972928 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0420 00:47:17.972958 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0420 00:47:18.058087 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0420 00:47:18.058115 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0420 00:47:18.122175 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0420 00:47:18.122210 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0420 00:47:18.196857 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0420 00:47:18.196898 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0420 00:47:18.223649 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0420 00:47:18.223683 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0420 00:47:18.308893 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0420 00:47:18.308918 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0420 00:47:18.317127 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0420 00:47:18.431462 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0420 00:47:18.431507 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0420 00:47:18.590897 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0420 00:47:18.976068 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:20.096220 1644261 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.427813593s)
	I0420 00:47:20.096422 1644261 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0420 00:47:20.703392 1644261 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-747503" context rescaled to 1 replicas
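The 3.4s CoreDNS edit completing at 00:47:20 splices a hosts plugin block into the Corefile ahead of the forward stanza, so in-cluster workloads can resolve host.minikube.internal to the gateway address 192.168.49.1; the follow-up rescale trims the coredns Deployment to one replica for this single-node cluster. The fragment injected by the sed pipeline above is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

and a manual equivalent of the rescale (the harness drives it through client-go rather than kubectl):

	kubectl --context addons-747503 -n kube-system scale deployment coredns --replicas=1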
	I0420 00:47:21.021961 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.02998807s)
	I0420 00:47:21.316695 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:22.188682 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.192309833s)
	I0420 00:47:22.188774 1644261 addons.go:470] Verifying addon ingress=true in "addons-747503"
	I0420 00:47:22.192225 1644261 out.go:177] * Verifying ingress addon...
	I0420 00:47:22.188943 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.981293644s)
	I0420 00:47:22.189126 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.978079286s)
	I0420 00:47:22.189179 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.91383171s)
	I0420 00:47:22.189212 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.88538872s)
	I0420 00:47:22.189275 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.847605776s)
	I0420 00:47:22.189361 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.554030452s)
	I0420 00:47:22.189389 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.51171099s)
	I0420 00:47:22.189421 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.626351674s)
	I0420 00:47:22.192770 1644261 addons.go:470] Verifying addon metrics-server=true in "addons-747503"
	I0420 00:47:22.192870 1644261 addons.go:470] Verifying addon registry=true in "addons-747503"
	I0420 00:47:22.196209 1644261 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0420 00:47:22.197884 1644261 out.go:177] * Verifying registry addon...
	I0420 00:47:22.200704 1644261 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0420 00:47:22.197983 1644261 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-747503 service yakd-dashboard -n yakd-dashboard
	
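On a headless CI host like this one, adding --url makes minikube service print the resolved NodePort address instead of trying to open it:

	minikube -p addons-747503 service yakd-dashboard -n yakd-dashboard --url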
	I0420 00:47:22.211783 1644261 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0420 00:47:22.211886 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:22.214456 1644261 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0420 00:47:22.214527 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0420 00:47:22.238344 1644261 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
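This warning is an optimistic-concurrency conflict, not an addon failure: the callback read the local-path StorageClass, something else updated it first, and the write-back carried a stale resourceVersion, which the API server rejects with exactly this message. Re-reading before writing, or using a patch (which sends no resourceVersion), sidesteps the race. An illustrative patch that clears the default marking:

	kubectl --context addons-747503 patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'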
	I0420 00:47:22.380821 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.063648234s)
	I0420 00:47:22.381083 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.483841163s)
	W0420 00:47:22.381139 1644261 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0420 00:47:22.381175 1644261 retry.go:31] will retry after 166.820915ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
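The failure is an ordering race, not a bad manifest: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in the same apply batch that creates its CRD, and the CRD is not yet served by the API when the class is resource-mapped, hence "ensure CRDs are installed first". The retry below succeeds once the CRDs are established. Sequencing it by hand would look like this (file paths taken from the log; the wait uses the standard CRD Established condition):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml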
	I0420 00:47:22.548877 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0420 00:47:22.600574 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.009625669s)
	I0420 00:47:22.600671 1644261 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-747503"
	I0420 00:47:22.603229 1644261 out.go:177] * Verifying csi-hostpath-driver addon...
	I0420 00:47:22.606034 1644261 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0420 00:47:22.670601 1644261 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0420 00:47:22.670671 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
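Each kapi.go:96 line below is one poll of a label selector, repeated until every matched pod reports Ready; since the node itself is still NotReady, all of the selectors keep reporting Pending through the rest of this excerpt. A single-shot equivalent for one of them (kubectl wait fails fast if no pods match yet, which is one reason the harness polls instead):

	kubectl --context addons-747503 -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m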
	I0420 00:47:22.720534 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:22.725459 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:23.175823 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:23.228069 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:23.229636 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:23.610334 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:23.701866 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:23.705088 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:23.785324 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:24.112035 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:24.205349 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:24.210857 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:24.611934 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:24.703020 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:24.705704 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:25.111349 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:25.203668 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:25.206311 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:25.544549 1644261 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0420 00:47:25.544657 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:25.569718 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:25.611543 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:25.707373 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:25.711627 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:25.751589 1644261 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0420 00:47:25.791122 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:25.816447 1644261 addons.go:234] Setting addon gcp-auth=true in "addons-747503"
	I0420 00:47:25.816499 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:25.816957 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:25.845517 1644261 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0420 00:47:25.845586 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:25.877309 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.328324528s)
	I0420 00:47:25.877697 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:25.975676 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0420 00:47:25.978070 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0420 00:47:25.980578 1644261 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0420 00:47:25.980605 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0420 00:47:25.999059 1644261 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0420 00:47:25.999087 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0420 00:47:26.023351 1644261 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0420 00:47:26.023374 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0420 00:47:26.045389 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
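gcp-auth pairs the credentials file copied earlier to /var/lib/minikube/google_application_credentials.json with a mutating webhook (the gcp-auth-webhook image above) that injects those credentials into workloads. Once this apply lands, the readiness poll below tracks the webhook pod; the same check by hand:

	kubectl --context addons-747503 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth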
	I0420 00:47:26.110669 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:26.202148 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:26.205465 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:26.611994 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:26.731236 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:26.732300 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:26.789639 1644261 addons.go:470] Verifying addon gcp-auth=true in "addons-747503"
	I0420 00:47:26.792415 1644261 out.go:177] * Verifying gcp-auth addon...
	I0420 00:47:26.795034 1644261 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0420 00:47:26.801627 1644261 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0420 00:47:26.801647 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:27.111220 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:27.202244 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:27.206342 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:27.299355 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:27.611552 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:27.704193 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:27.706819 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:27.800094 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:28.112242 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:28.203543 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:28.205771 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:28.285015 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:28.299535 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:28.610656 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:28.701793 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:28.705035 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:28.799716 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:29.110925 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:29.202332 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:29.205716 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:29.298341 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:29.610673 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:29.701621 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:29.706306 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:29.798919 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:30.112640 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:30.204537 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:30.206706 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:30.299144 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:30.610522 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:30.702150 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:30.704677 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:30.784363 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:30.799016 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:31.110019 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:31.202111 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:31.205063 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:31.299204 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:31.611514 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:31.701803 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:31.704838 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:31.798703 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:32.111116 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:32.202000 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:32.204400 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:32.298928 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:32.611045 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:32.702323 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:32.705822 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:32.789058 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:32.798725 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:33.110424 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:33.204110 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:33.205044 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:33.298391 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:33.610022 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:33.701936 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:33.705447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:33.798469 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:34.111023 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:34.201972 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:34.204989 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:34.298734 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:34.611132 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:34.702210 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:34.704568 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:34.798686 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:35.114336 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:35.201894 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:35.204405 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:35.284372 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:35.299112 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:35.610506 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:35.703095 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:35.704656 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:35.798675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:36.111001 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:36.202000 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:36.205085 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:36.298568 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:36.610614 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:36.701867 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:36.705335 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:36.798308 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:37.110714 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:37.201367 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:37.205486 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:37.298446 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:37.610698 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:37.701750 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:37.703884 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:37.784367 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:37.798325 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:38.110748 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:38.201466 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:38.204655 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:38.298691 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:38.610939 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:38.702667 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:38.706391 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:38.798284 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:39.110263 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:39.202152 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:39.205507 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:39.298283 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:39.610642 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:39.701439 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:39.704765 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:39.784473 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:39.798580 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:40.111665 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:40.201902 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:40.204092 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:40.298312 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:40.611241 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:40.701932 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:40.704592 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:40.798190 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:41.110629 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:41.202188 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:41.204072 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:41.298945 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:41.611166 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:41.702282 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:41.704810 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:41.784563 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:41.798789 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:42.110754 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:42.202995 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:42.205235 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:42.299222 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:42.611146 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:42.702723 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:42.705024 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:42.798378 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:43.110641 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:43.202122 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:43.204551 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:43.299823 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:43.610371 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:43.702371 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:43.705119 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:43.798537 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:44.110802 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:44.201908 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:44.204050 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:44.283628 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:44.299022 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:44.611069 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:44.702139 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:44.704447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:44.798400 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:45.110913 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:45.203669 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:45.207125 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:45.299652 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:45.611596 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:45.702314 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:45.704145 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:45.798613 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:46.111305 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:46.202398 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:46.207349 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:46.284676 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:46.299304 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:46.610152 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:46.701326 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:46.704270 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:46.798790 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:47.110635 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:47.201792 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:47.203647 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:47.298677 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:47.610877 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:47.701753 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:47.704933 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:47.798847 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:48.111425 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:48.201251 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:48.204521 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:48.298577 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:48.615418 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:48.703331 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:48.707352 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:48.784673 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:48.800104 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:49.112682 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:49.201597 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:49.204732 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:49.298628 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:49.610976 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:49.701739 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:49.705321 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:49.798577 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:50.111739 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:50.201941 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:50.203737 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:50.299042 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:50.610954 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:50.709155 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:50.709916 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:50.799587 1644261 node_ready.go:49] node "addons-747503" has status "Ready":"True"
	I0420 00:47:50.799614 1644261 node_ready.go:38] duration metric: took 34.018855397s for node "addons-747503" to be "Ready" ...
	I0420 00:47:50.799624 1644261 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
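
The node_ready.go transition above (repeated "Ready":"False" polls ending in "Ready":"True" after 34.018855397s) comes from a loop that re-reads the Node object until its Ready condition flips. A minimal sketch of that kind of poll, assuming client-go; the helper name waitNodeReady, the 2s interval, and the kubeconfig loading are illustrative assumptions, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node until its Ready condition is True, which is
// what the node_ready.go status lines above are reporting on each iteration.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling rather than abort
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	err = waitNodeReady(context.Background(), cs, "addons-747503", 6*time.Minute)
	fmt.Println("node wait finished:", err)
}

Returning (false, nil) on transient errors keeps the poll alive instead of failing fast, which matches how the log keeps emitting status lines until the node is Ready.
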
	I0420 00:47:50.839199 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:50.842280 1644261 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pj8wd" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.121354 1644261 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0420 00:47:51.121385 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:51.265128 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:51.306316 1644261 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0420 00:47:51.306343 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:51.316679 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:51.646936 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:51.738236 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:51.738864 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:51.825253 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:51.869885 1644261 pod_ready.go:92] pod "coredns-7db6d8ff4d-pj8wd" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.869905 1644261 pod_ready.go:81] duration metric: took 1.02759912s for pod "coredns-7db6d8ff4d-pj8wd" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.869936 1644261 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.880443 1644261 pod_ready.go:92] pod "etcd-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.880468 1644261 pod_ready.go:81] duration metric: took 10.523706ms for pod "etcd-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.880483 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.893210 1644261 pod_ready.go:92] pod "kube-apiserver-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.893237 1644261 pod_ready.go:81] duration metric: took 12.745711ms for pod "kube-apiserver-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.893253 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.902837 1644261 pod_ready.go:92] pod "kube-controller-manager-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.902861 1644261 pod_ready.go:81] duration metric: took 9.600699ms for pod "kube-controller-manager-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.902876 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cmk9r" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.984300 1644261 pod_ready.go:92] pod "kube-proxy-cmk9r" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.984328 1644261 pod_ready.go:81] duration metric: took 81.441699ms for pod "kube-proxy-cmk9r" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.984340 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:52.112853 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:52.203480 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:52.206627 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:52.298995 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:52.385428 1644261 pod_ready.go:92] pod "kube-scheduler-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:52.385502 1644261 pod_ready.go:81] duration metric: took 401.135821ms for pod "kube-scheduler-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:52.385569 1644261 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace to be "Ready" ...
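
Each pod_ready.go:78/92 pair above waits on, then confirms, a single pod's Ready condition, and the pod_ready.go:102 lines that follow are the same check reporting "Ready":"False" while a pod (here metrics-server) is still coming up. A minimal sketch of that condition check, assuming client-go; isPodReady is a hypothetical helper, not minikube's API:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// what status "Ready":"True" means in the log lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-7db6d8ff4d-pj8wd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}
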
	I0420 00:47:52.612694 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:52.702190 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:52.747764 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:52.816322 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:53.112453 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:53.204654 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:53.207621 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:53.300011 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:53.611628 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:53.705044 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:53.707972 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:53.798494 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:54.114108 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:54.205753 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:54.208644 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:54.299729 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:54.393624 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:47:54.614619 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:54.705416 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:54.734978 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:54.802347 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:55.114619 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:55.207471 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:55.207745 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:55.298943 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:55.613443 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:55.703721 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:55.711000 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:55.800030 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:56.112264 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:56.202387 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:56.205588 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:56.298472 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:56.611287 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:56.702954 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:56.706206 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:56.806862 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:56.892905 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:47:57.111783 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:57.202483 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:57.206930 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:57.298660 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:57.614099 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:57.715434 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:57.716689 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:57.799161 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:58.111940 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:58.202592 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:58.205903 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:58.299398 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:58.614025 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:58.703746 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:58.710610 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:58.800586 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:59.113393 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:59.210841 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:59.213028 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:59.300372 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:59.395893 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:47:59.624560 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:59.702174 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:59.706175 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:59.798856 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:00.144163 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:00.225477 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:00.229320 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:00.331039 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:00.612190 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:00.703620 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:00.707231 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:00.800642 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:01.114451 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:01.205638 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:01.213681 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:01.299675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:01.612691 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:01.703427 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:01.705034 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:01.798680 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:01.892315 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:02.114119 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:02.204802 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:02.225935 1644261 kapi.go:107] duration metric: took 40.025228032s to wait for kubernetes.io/minikube-addons=registry ...
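
The kapi.go:96 lines that dominate this log are one poll iteration per label selector roughly every 500ms; kapi.go:86 fires once pods matching the selector exist, and kapi.go:107 records the total wait (40.025228032s for kubernetes.io/minikube-addons=registry above). A minimal sketch of such a selector wait, assuming client-go; waitForPodsWithLabel and its Running-phase check are illustrative assumptions, not minikube's exact logic:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsWithLabel polls until every pod matching selector is Running,
// reporting how many pods matched once they appear, as kapi.go:86 does above.
func waitForPodsWithLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing matched yet: the logged state stays Pending
			}
			fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // at least one pod still starting: poll again
				}
			}
			return true, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	err = waitForPodsWithLabel(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute)
	fmt.Println("wait finished:", err)
}

Polling on a short fixed interval rather than opening a watch keeps the logic simple and explains the steady ~500ms cadence of the repeated "waiting for pod" lines for each selector.
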
	I0420 00:48:02.312863 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:02.627694 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:02.703161 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:02.798728 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:03.113524 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:03.202842 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:03.299523 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:03.613428 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:03.703775 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:03.800788 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:03.895136 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:04.113370 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:04.202943 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:04.299410 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:04.613215 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:04.702731 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:04.799550 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:05.113042 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:05.202585 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:05.299047 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:05.614002 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:05.702680 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:05.802558 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:05.895296 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:06.114675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:06.204549 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:06.302473 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:06.614112 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:06.703570 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:06.799260 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:07.113316 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:07.202565 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:07.298863 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:07.612018 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:07.703264 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:07.798648 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:08.112270 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:08.203042 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:08.299153 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:08.393303 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:08.613346 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:08.702765 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:08.799658 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:09.112127 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:09.203847 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:09.299447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:09.614216 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:09.703657 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:09.800601 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:10.118707 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:10.202184 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:10.298516 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:10.393387 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:10.613200 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:10.703138 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:10.800550 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:11.137314 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:11.202576 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:11.299140 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:11.613141 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:11.703548 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:11.805116 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:12.111561 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:12.202457 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:12.299341 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:12.612160 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:12.702500 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:12.799184 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:12.892518 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:13.111952 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:13.202640 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:13.299417 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:13.612612 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:13.703214 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:13.799562 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:14.112092 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:14.202789 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:14.298940 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:14.612071 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:14.701930 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:14.798636 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:15.112017 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:15.202850 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:15.300071 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:15.396380 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:15.642371 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:15.703162 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:15.799216 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:16.113062 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:16.202577 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:16.298955 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:16.612867 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:16.702379 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:16.798754 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:17.111096 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:17.202268 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:17.298651 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:17.611502 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:17.702326 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:17.799117 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:17.891723 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:18.112154 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:18.203753 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:18.298844 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:18.611817 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:18.701804 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:18.799969 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:19.112834 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:19.202549 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:19.299356 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:19.612472 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:19.702362 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:19.800164 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:19.894073 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:20.112337 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:20.205154 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:20.299624 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:20.612270 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:20.702951 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:20.798601 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:21.111917 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:21.202373 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:21.299686 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:21.612723 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:21.702148 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:21.798629 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:22.112515 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:22.203367 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:22.302148 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:22.392640 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:22.612660 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:22.702810 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:22.798786 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:23.111919 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:23.201812 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:23.299108 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:23.611637 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:23.701775 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:23.799304 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:24.131787 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:24.255684 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:24.316635 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:24.418028 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:24.617598 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:24.702006 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:24.799016 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:25.112839 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:25.201904 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:25.298574 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:25.611863 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:25.702151 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:25.798626 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:26.112621 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:26.202033 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:26.298771 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:26.621173 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:26.704880 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:26.799455 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:26.892011 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:27.111595 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:27.202612 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:27.298946 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:27.612741 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:27.703062 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:27.799262 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:28.111989 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:28.205133 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:28.300116 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:28.617767 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:28.702926 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:28.798906 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:28.895433 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:29.115635 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:29.202330 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:29.305517 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:29.612221 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:29.715115 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:29.800816 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:30.113986 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:30.204176 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:30.300436 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:30.611466 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:30.702492 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:30.799210 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:31.132012 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:31.204558 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:31.299795 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:31.394405 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:31.611911 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:31.702249 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:31.798674 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:32.115252 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:32.204156 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:32.298833 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:32.612034 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:32.702349 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:32.798789 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:33.112130 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:33.202582 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:33.299170 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:33.612774 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:33.702834 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:33.800234 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:33.892097 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:34.112008 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:34.202297 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:34.298717 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:34.619328 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:34.703435 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:34.806019 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:35.120414 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:35.205464 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:35.300355 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:35.613650 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:35.703059 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:35.799691 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:35.894200 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:36.113606 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:36.202374 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:36.299444 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:36.612786 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:36.702747 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:36.799741 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:37.112124 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:37.202762 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:37.301973 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:37.612106 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:37.702390 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:37.820773 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:37.896709 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:38.112525 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:38.203055 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:38.298160 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:38.614559 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:38.702186 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:38.798806 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:39.113206 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:39.203142 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:39.302909 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:39.621741 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:39.702067 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:39.799389 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:40.113336 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:40.203042 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:40.298723 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:40.395135 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:40.612345 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:40.702488 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:40.799104 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:41.122104 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:41.202448 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:41.300486 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:41.612243 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:41.703549 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:41.799237 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:42.111985 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:42.203111 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:42.302465 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:42.612639 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:42.703714 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:42.799406 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:42.892695 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:43.112179 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:43.203272 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:43.298925 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:43.612258 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:43.702705 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:43.799390 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:44.115774 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:44.202051 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:44.298314 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:44.611987 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:44.702124 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:44.798541 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:45.112791 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:45.204493 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:45.299729 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:45.393729 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:45.612789 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:45.702486 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:45.799560 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:46.119540 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:46.202732 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:46.299293 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:46.611893 1644261 kapi.go:107] duration metric: took 1m24.005858121s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0420 00:48:46.702042 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:46.798447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:47.202393 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:47.298773 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:47.701765 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:47.799351 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:47.892278 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:48.202626 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:48.299390 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:48.702100 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:48.799332 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:49.201889 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:49.299047 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:49.702415 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:49.799051 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:49.892790 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:50.202697 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:50.299292 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:50.702478 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:50.798784 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:51.202229 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:51.298707 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:51.703133 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:51.798709 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:52.202174 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:52.298480 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:52.391893 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:52.702258 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:52.798434 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:53.202557 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:53.298973 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:53.702469 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:53.798795 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:54.201914 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:54.299019 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:54.392210 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:54.702208 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:54.798675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:55.201889 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:55.299247 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:55.703349 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:55.798800 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:56.201804 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:56.299211 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:56.398126 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:56.703544 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:56.801082 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:57.203554 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:57.300723 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:57.701744 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:57.799038 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:58.202294 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:58.298796 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:58.408975 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:58.703585 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:58.800895 1644261 kapi.go:107] duration metric: took 1m32.005859357s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0420 00:48:58.803430 1644261 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-747503 cluster.
	I0420 00:48:58.805977 1644261 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0420 00:48:58.808774 1644261 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0420 00:48:59.214348 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:59.702282 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:00.204506 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:00.703046 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:00.895496 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:49:01.210517 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:01.401710 1644261 pod_ready.go:92] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"True"
	I0420 00:49:01.401740 1644261 pod_ready.go:81] duration metric: took 1m9.016144355s for pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace to be "Ready" ...
	I0420 00:49:01.401759 1644261 pod_ready.go:38] duration metric: took 1m10.602112322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:49:01.401774 1644261 api_server.go:52] waiting for apiserver process to appear ...
	I0420 00:49:01.401809 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:49:01.401878 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:49:01.476056 1644261 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:01.476082 1644261 cri.go:89] found id: ""
	I0420 00:49:01.476091 1644261 logs.go:276] 1 containers: [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053]
	I0420 00:49:01.476157 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.482452 1644261 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:49:01.482549 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:49:01.545143 1644261 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:01.545169 1644261 cri.go:89] found id: ""
	I0420 00:49:01.545179 1644261 logs.go:276] 1 containers: [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330]
	I0420 00:49:01.545245 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.550669 1644261 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:49:01.550748 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:49:01.613640 1644261 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:01.613666 1644261 cri.go:89] found id: ""
	I0420 00:49:01.613678 1644261 logs.go:276] 1 containers: [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df]
	I0420 00:49:01.613749 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.619858 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:49:01.619944 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:49:01.677562 1644261 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:01.677589 1644261 cri.go:89] found id: ""
	I0420 00:49:01.677600 1644261 logs.go:276] 1 containers: [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929]
	I0420 00:49:01.677672 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.682732 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:49:01.682885 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:49:01.704238 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:01.772321 1644261 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:01.772392 1644261 cri.go:89] found id: ""
	I0420 00:49:01.772427 1644261 logs.go:276] 1 containers: [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20]
	I0420 00:49:01.772523 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.776830 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:49:01.776962 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:49:01.856325 1644261 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:01.856401 1644261 cri.go:89] found id: ""
	I0420 00:49:01.856433 1644261 logs.go:276] 1 containers: [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e]
	I0420 00:49:01.856549 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.861620 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:49:01.861776 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:49:01.928733 1644261 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:01.928808 1644261 cri.go:89] found id: ""
	I0420 00:49:01.928845 1644261 logs.go:276] 1 containers: [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52]
	I0420 00:49:01.928943 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.933261 1644261 logs.go:123] Gathering logs for dmesg ...
	I0420 00:49:01.933340 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:49:01.955010 1644261 logs.go:123] Gathering logs for kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] ...
	I0420 00:49:01.955091 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:02.037301 1644261 logs.go:123] Gathering logs for etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] ...
	I0420 00:49:02.037382 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:02.098944 1644261 logs.go:123] Gathering logs for kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] ...
	I0420 00:49:02.098977 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:02.203698 1644261 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:49:02.203731 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:49:02.208871 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:02.335611 1644261 logs.go:123] Gathering logs for kubelet ...
	I0420 00:49:02.335713 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0420 00:49:02.411512 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.801309    1518 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.411792 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.801356    1518 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.412589 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815347    1518 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.412756 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815367    1518 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.413022 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.413229 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.413874 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820274    1518 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414080 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820315    1518 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414271 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820622    1518 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414479 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820646    1518 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414667 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.821047    1518 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414879 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.415678 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.416354 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.416560 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.416767 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:02.481101 1644261 logs.go:123] Gathering logs for coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] ...
	I0420 00:49:02.481147 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:02.545331 1644261 logs.go:123] Gathering logs for kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] ...
	I0420 00:49:02.545360 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:02.653396 1644261 logs.go:123] Gathering logs for kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] ...
	I0420 00:49:02.653434 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:02.703574 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:02.718396 1644261 logs.go:123] Gathering logs for kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] ...
	I0420 00:49:02.718473 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:02.784614 1644261 logs.go:123] Gathering logs for container status ...
	I0420 00:49:02.784642 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:49:02.862815 1644261 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:49:02.862918 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:49:03.154905 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:03.154978 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0420 00:49:03.155071 1644261 out.go:239] X Problems detected in kubelet:
	W0420 00:49:03.155116 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155296 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155332 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155379 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155413 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:03.155457 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:03.155482 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:49:03.203035 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:03.702362 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:04.203239 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:04.711009 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:05.203956 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:05.702494 1644261 kapi.go:107] duration metric: took 1m43.506280483s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0420 00:49:05.705025 1644261 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0420 00:49:05.707241 1644261 addons.go:505] duration metric: took 1m49.525505308s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0420 00:49:13.156219 1644261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:49:13.170608 1644261 api_server.go:72] duration metric: took 1m56.989122484s to wait for apiserver process to appear ...
	I0420 00:49:13.170636 1644261 api_server.go:88] waiting for apiserver healthz status ...
	I0420 00:49:13.170677 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:49:13.170743 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:49:13.215140 1644261 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:13.215162 1644261 cri.go:89] found id: ""
	I0420 00:49:13.215171 1644261 logs.go:276] 1 containers: [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053]
	I0420 00:49:13.215236 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.218892 1644261 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:49:13.218971 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:49:13.263654 1644261 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:13.263682 1644261 cri.go:89] found id: ""
	I0420 00:49:13.263691 1644261 logs.go:276] 1 containers: [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330]
	I0420 00:49:13.263764 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.267679 1644261 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:49:13.267768 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:49:13.309684 1644261 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:13.309708 1644261 cri.go:89] found id: ""
	I0420 00:49:13.309720 1644261 logs.go:276] 1 containers: [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df]
	I0420 00:49:13.309776 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.313423 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:49:13.313507 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:49:13.351369 1644261 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:13.351394 1644261 cri.go:89] found id: ""
	I0420 00:49:13.351403 1644261 logs.go:276] 1 containers: [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929]
	I0420 00:49:13.351459 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.358220 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:49:13.358301 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:49:13.402876 1644261 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:13.402901 1644261 cri.go:89] found id: ""
	I0420 00:49:13.402909 1644261 logs.go:276] 1 containers: [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20]
	I0420 00:49:13.402967 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.406557 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:49:13.406631 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:49:13.446459 1644261 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:13.446528 1644261 cri.go:89] found id: ""
	I0420 00:49:13.446542 1644261 logs.go:276] 1 containers: [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e]
	I0420 00:49:13.446602 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.450261 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:49:13.450351 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:49:13.490186 1644261 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:13.490224 1644261 cri.go:89] found id: ""
	I0420 00:49:13.490234 1644261 logs.go:276] 1 containers: [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52]
	I0420 00:49:13.490331 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.493880 1644261 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:49:13.493909 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:49:13.625695 1644261 logs.go:123] Gathering logs for kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] ...
	I0420 00:49:13.625770 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:13.692424 1644261 logs.go:123] Gathering logs for coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] ...
	I0420 00:49:13.692460 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:13.739447 1644261 logs.go:123] Gathering logs for kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] ...
	I0420 00:49:13.739479 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:13.783910 1644261 logs.go:123] Gathering logs for container status ...
	I0420 00:49:13.783946 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:49:13.846042 1644261 logs.go:123] Gathering logs for kubelet ...
	I0420 00:49:13.846079 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0420 00:49:13.886398 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.801309    1518 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.886620 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.801356    1518 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887405 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815347    1518 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887575 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815367    1518 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887758 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887960 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.888582 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820274    1518 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.888784 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820315    1518 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.888970 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820622    1518 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.889177 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820646    1518 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.889365 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.821047    1518 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.889580 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890396 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890599 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890791 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890996 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:13.938460 1644261 logs.go:123] Gathering logs for dmesg ...
	I0420 00:49:13.938494 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:49:13.965511 1644261 logs.go:123] Gathering logs for kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] ...
	I0420 00:49:13.965647 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:14.058549 1644261 logs.go:123] Gathering logs for kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] ...
	I0420 00:49:14.058589 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:14.107649 1644261 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:49:14.107678 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:49:14.200718 1644261 logs.go:123] Gathering logs for etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] ...
	I0420 00:49:14.200757 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:14.253776 1644261 logs.go:123] Gathering logs for kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] ...
	I0420 00:49:14.253816 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:14.295736 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:14.295762 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0420 00:49:14.295814 1644261 out.go:239] X Problems detected in kubelet:
	W0420 00:49:14.295828 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295836 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295844 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295852 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295857 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:14.295871 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:14.295877 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:49:24.297174 1644261 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 00:49:24.304890 1644261 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0420 00:49:24.305901 1644261 api_server.go:141] control plane version: v1.30.0
	I0420 00:49:24.305926 1644261 api_server.go:131] duration metric: took 11.135283023s to wait for apiserver health ...
	I0420 00:49:24.305935 1644261 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 00:49:24.305957 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:49:24.306023 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:49:24.342719 1644261 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:24.342741 1644261 cri.go:89] found id: ""
	I0420 00:49:24.342749 1644261 logs.go:276] 1 containers: [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053]
	I0420 00:49:24.342812 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.346322 1644261 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:49:24.346394 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:49:24.390679 1644261 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:24.390702 1644261 cri.go:89] found id: ""
	I0420 00:49:24.390710 1644261 logs.go:276] 1 containers: [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330]
	I0420 00:49:24.390791 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.394567 1644261 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:49:24.394662 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:49:24.442284 1644261 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:24.442307 1644261 cri.go:89] found id: ""
	I0420 00:49:24.442315 1644261 logs.go:276] 1 containers: [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df]
	I0420 00:49:24.442382 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.446024 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:49:24.446108 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:49:24.484224 1644261 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:24.484248 1644261 cri.go:89] found id: ""
	I0420 00:49:24.484260 1644261 logs.go:276] 1 containers: [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929]
	I0420 00:49:24.484317 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.488065 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:49:24.488140 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:49:24.561054 1644261 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:24.561075 1644261 cri.go:89] found id: ""
	I0420 00:49:24.561085 1644261 logs.go:276] 1 containers: [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20]
	I0420 00:49:24.561141 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.564741 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:49:24.564860 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:49:24.605384 1644261 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:24.605444 1644261 cri.go:89] found id: ""
	I0420 00:49:24.605466 1644261 logs.go:276] 1 containers: [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e]
	I0420 00:49:24.605568 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.609475 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:49:24.610101 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:49:24.647409 1644261 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:24.647432 1644261 cri.go:89] found id: ""
	I0420 00:49:24.647441 1644261 logs.go:276] 1 containers: [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52]
	I0420 00:49:24.647516 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.650908 1644261 logs.go:123] Gathering logs for kubelet ...
	I0420 00:49:24.650933 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0420 00:49:24.687053 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.801309    1518 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.687296 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.801356    1518 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688077 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815347    1518 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688245 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815367    1518 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688430 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688630 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689257 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820274    1518 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689459 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820315    1518 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689656 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820622    1518 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689866 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820646    1518 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.690051 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.821047    1518 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.690258 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691080 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691285 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691472 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691679 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:24.740157 1644261 logs.go:123] Gathering logs for dmesg ...
	I0420 00:49:24.740187 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:49:24.760602 1644261 logs.go:123] Gathering logs for kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] ...
	I0420 00:49:24.760632 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:24.828968 1644261 logs.go:123] Gathering logs for etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] ...
	I0420 00:49:24.829007 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:24.876633 1644261 logs.go:123] Gathering logs for kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] ...
	I0420 00:49:24.876671 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:24.922399 1644261 logs.go:123] Gathering logs for container status ...
	I0420 00:49:24.922431 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:49:24.969473 1644261 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:49:24.969505 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:49:25.149062 1644261 logs.go:123] Gathering logs for coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] ...
	I0420 00:49:25.149098 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:25.194458 1644261 logs.go:123] Gathering logs for kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] ...
	I0420 00:49:25.194489 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:25.247513 1644261 logs.go:123] Gathering logs for kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] ...
	I0420 00:49:25.247547 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:25.283929 1644261 logs.go:123] Gathering logs for kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] ...
	I0420 00:49:25.283956 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:25.350599 1644261 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:49:25.350633 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:49:25.466073 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:25.466105 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0420 00:49:25.466176 1644261 out.go:239] X Problems detected in kubelet:
	W0420 00:49:25.466192 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466205 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466234 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466254 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466269 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:25.466276 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:25.466287 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:49:35.482334 1644261 system_pods.go:59] 18 kube-system pods found
	I0420 00:49:35.482377 1644261 system_pods.go:61] "coredns-7db6d8ff4d-pj8wd" [ce9c9144-65d1-45f2-a6e0-65ac4c220237] Running
	I0420 00:49:35.482384 1644261 system_pods.go:61] "csi-hostpath-attacher-0" [1407d955-83ec-4b1d-ac07-d55e593f975f] Running
	I0420 00:49:35.482389 1644261 system_pods.go:61] "csi-hostpath-resizer-0" [023884e7-abc6-4359-95ba-ee8031b2db76] Running
	I0420 00:49:35.482394 1644261 system_pods.go:61] "csi-hostpathplugin-z7j5n" [b938be04-8aac-427e-a62d-e0d6ecea4fe9] Running
	I0420 00:49:35.482399 1644261 system_pods.go:61] "etcd-addons-747503" [707cce58-27c7-483a-9f12-80d354c6e443] Running
	I0420 00:49:35.482402 1644261 system_pods.go:61] "kindnet-x7szp" [910dbd2a-9863-4585-8a5d-98c1bb4817e2] Running
	I0420 00:49:35.482407 1644261 system_pods.go:61] "kube-apiserver-addons-747503" [81db4265-6e75-41b4-85b6-c7e09e1979a7] Running
	I0420 00:49:35.482411 1644261 system_pods.go:61] "kube-controller-manager-addons-747503" [f4cfdf92-3a76-49c4-b1f6-3bc7cf34cd49] Running
	I0420 00:49:35.482420 1644261 system_pods.go:61] "kube-ingress-dns-minikube" [ec712066-7b44-45dc-a961-0f7688a75714] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0420 00:49:35.482431 1644261 system_pods.go:61] "kube-proxy-cmk9r" [13976009-573c-4b43-8062-07d9a92cb809] Running
	I0420 00:49:35.482441 1644261 system_pods.go:61] "kube-scheduler-addons-747503" [4c4ccef8-4e11-425f-9dc6-178584aa294d] Running
	I0420 00:49:35.482445 1644261 system_pods.go:61] "metrics-server-c59844bb4-jmtz4" [582654f0-7046-465f-b015-d889d5397c3c] Running
	I0420 00:49:35.482458 1644261 system_pods.go:61] "nvidia-device-plugin-daemonset-8wcvh" [1dc1e685-c035-4a95-99c7-d40ef680694c] Running
	I0420 00:49:35.482462 1644261 system_pods.go:61] "registry-proxy-5c8mf" [78326941-b968-43a4-865c-3f7c843b92c7] Running
	I0420 00:49:35.482466 1644261 system_pods.go:61] "registry-sx6fv" [c3fda03d-8cd2-4cff-9835-e17c079b7e05] Running
	I0420 00:49:35.482470 1644261 system_pods.go:61] "snapshot-controller-745499f584-7chnh" [1d82f222-8775-4214-b579-247919a249be] Running
	I0420 00:49:35.482474 1644261 system_pods.go:61] "snapshot-controller-745499f584-nk457" [a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c] Running
	I0420 00:49:35.482478 1644261 system_pods.go:61] "storage-provisioner" [c64f875a-fc82-45a9-acce-a3f649735d47] Running
	I0420 00:49:35.482493 1644261 system_pods.go:74] duration metric: took 11.176551903s to wait for pod list to return data ...
	I0420 00:49:35.482501 1644261 default_sa.go:34] waiting for default service account to be created ...
	I0420 00:49:35.485056 1644261 default_sa.go:45] found service account: "default"
	I0420 00:49:35.485086 1644261 default_sa.go:55] duration metric: took 2.576218ms for default service account to be created ...
	I0420 00:49:35.485096 1644261 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 00:49:35.495868 1644261 system_pods.go:86] 18 kube-system pods found
	I0420 00:49:35.495904 1644261 system_pods.go:89] "coredns-7db6d8ff4d-pj8wd" [ce9c9144-65d1-45f2-a6e0-65ac4c220237] Running
	I0420 00:49:35.495912 1644261 system_pods.go:89] "csi-hostpath-attacher-0" [1407d955-83ec-4b1d-ac07-d55e593f975f] Running
	I0420 00:49:35.495918 1644261 system_pods.go:89] "csi-hostpath-resizer-0" [023884e7-abc6-4359-95ba-ee8031b2db76] Running
	I0420 00:49:35.495922 1644261 system_pods.go:89] "csi-hostpathplugin-z7j5n" [b938be04-8aac-427e-a62d-e0d6ecea4fe9] Running
	I0420 00:49:35.495926 1644261 system_pods.go:89] "etcd-addons-747503" [707cce58-27c7-483a-9f12-80d354c6e443] Running
	I0420 00:49:35.495931 1644261 system_pods.go:89] "kindnet-x7szp" [910dbd2a-9863-4585-8a5d-98c1bb4817e2] Running
	I0420 00:49:35.495936 1644261 system_pods.go:89] "kube-apiserver-addons-747503" [81db4265-6e75-41b4-85b6-c7e09e1979a7] Running
	I0420 00:49:35.495940 1644261 system_pods.go:89] "kube-controller-manager-addons-747503" [f4cfdf92-3a76-49c4-b1f6-3bc7cf34cd49] Running
	I0420 00:49:35.495951 1644261 system_pods.go:89] "kube-ingress-dns-minikube" [ec712066-7b44-45dc-a961-0f7688a75714] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0420 00:49:35.495962 1644261 system_pods.go:89] "kube-proxy-cmk9r" [13976009-573c-4b43-8062-07d9a92cb809] Running
	I0420 00:49:35.495977 1644261 system_pods.go:89] "kube-scheduler-addons-747503" [4c4ccef8-4e11-425f-9dc6-178584aa294d] Running
	I0420 00:49:35.495981 1644261 system_pods.go:89] "metrics-server-c59844bb4-jmtz4" [582654f0-7046-465f-b015-d889d5397c3c] Running
	I0420 00:49:35.495986 1644261 system_pods.go:89] "nvidia-device-plugin-daemonset-8wcvh" [1dc1e685-c035-4a95-99c7-d40ef680694c] Running
	I0420 00:49:35.495993 1644261 system_pods.go:89] "registry-proxy-5c8mf" [78326941-b968-43a4-865c-3f7c843b92c7] Running
	I0420 00:49:35.495999 1644261 system_pods.go:89] "registry-sx6fv" [c3fda03d-8cd2-4cff-9835-e17c079b7e05] Running
	I0420 00:49:35.496006 1644261 system_pods.go:89] "snapshot-controller-745499f584-7chnh" [1d82f222-8775-4214-b579-247919a249be] Running
	I0420 00:49:35.496011 1644261 system_pods.go:89] "snapshot-controller-745499f584-nk457" [a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c] Running
	I0420 00:49:35.496015 1644261 system_pods.go:89] "storage-provisioner" [c64f875a-fc82-45a9-acce-a3f649735d47] Running
	I0420 00:49:35.496023 1644261 system_pods.go:126] duration metric: took 10.920416ms to wait for k8s-apps to be running ...
	I0420 00:49:35.496034 1644261 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 00:49:35.496098 1644261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:49:35.510334 1644261 system_svc.go:56] duration metric: took 14.291022ms WaitForService to wait for kubelet
	I0420 00:49:35.510421 1644261 kubeadm.go:576] duration metric: took 2m19.328937561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:49:35.510458 1644261 node_conditions.go:102] verifying NodePressure condition ...
	I0420 00:49:35.513887 1644261 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0420 00:49:35.513920 1644261 node_conditions.go:123] node cpu capacity is 2
	I0420 00:49:35.513932 1644261 node_conditions.go:105] duration metric: took 3.453007ms to run NodePressure ...
	I0420 00:49:35.513944 1644261 start.go:240] waiting for startup goroutines ...
	I0420 00:49:35.513972 1644261 start.go:245] waiting for cluster config update ...
	I0420 00:49:35.514000 1644261 start.go:254] writing updated cluster config ...
	I0420 00:49:35.514532 1644261 ssh_runner.go:195] Run: rm -f paused
	I0420 00:49:35.939859 1644261 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 00:49:35.941995 1644261 out.go:177] * Done! kubectl is now configured to use "addons-747503" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 20 00:54:45 addons-747503 crio[920]: time="2024-04-20 00:54:45.360739825Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 20 00:54:45 addons-747503 crio[920]: time="2024-04-20 00:54:45.423876535Z" level=info msg="Created container 8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb: default/hello-world-app-86c47465fc-j7hjs/hello-world-app" id=d2399e49-b072-461a-a099-bb2d11a5a6a2 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 20 00:54:45 addons-747503 crio[920]: time="2024-04-20 00:54:45.424744341Z" level=info msg="Starting container: 8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb" id=4cdde2cb-cce7-4514-8350-4760e6f53f45 name=/runtime.v1.RuntimeService/StartContainer
	Apr 20 00:54:45 addons-747503 crio[920]: time="2024-04-20 00:54:45.432252468Z" level=info msg="Started container" PID=9144 containerID=8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb description=default/hello-world-app-86c47465fc-j7hjs/hello-world-app id=4cdde2cb-cce7-4514-8350-4760e6f53f45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=794425e63ed03b2adefd1957ebb2c482f0f81978b5dc0fd339dffcd01ca4fff5
	Apr 20 00:54:45 addons-747503 conmon[9132]: conmon 8818edb53c198739c3fe <ninfo>: container 9144 exited with status 1
	Apr 20 00:54:46 addons-747503 crio[920]: time="2024-04-20 00:54:46.384511473Z" level=info msg="Removing container: bb8572fed906e14b8da47fff2a12bfbec6de2408ac76c970d290a88f60e76acf" id=2038c77a-d3f3-486a-b5f6-abc8521be767 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:54:46 addons-747503 crio[920]: time="2024-04-20 00:54:46.410152906Z" level=info msg="Removed container bb8572fed906e14b8da47fff2a12bfbec6de2408ac76c970d290a88f60e76acf: default/hello-world-app-86c47465fc-j7hjs/hello-world-app" id=2038c77a-d3f3-486a-b5f6-abc8521be767 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.358759588Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=c8f80222-84f2-4879-af4c-066a21860cc3 name=/runtime.v1.ImageService/ImageStatus
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.359000606Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c8f80222-84f2-4879-af4c-066a21860cc3 name=/runtime.v1.ImageService/ImageStatus
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.359770676Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=51fce3a7-c130-48cb-9cfb-2684c364685d name=/runtime.v1.ImageService/ImageStatus
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.359950289Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=51fce3a7-c130-48cb-9cfb-2684c364685d name=/runtime.v1.ImageService/ImageStatus
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.360925013Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-j7hjs/hello-world-app" id=6294dee8-4458-46df-9e7c-fae90ff91f10 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.361032399Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.446225304Z" level=info msg="Created container 174184f159113ed72fdfeffaa1836e6ae2ba344ccbfa80f82b77aa40c598ce61: default/hello-world-app-86c47465fc-j7hjs/hello-world-app" id=6294dee8-4458-46df-9e7c-fae90ff91f10 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.447028538Z" level=info msg="Starting container: 174184f159113ed72fdfeffaa1836e6ae2ba344ccbfa80f82b77aa40c598ce61" id=22f5f857-d4f0-49ef-83d2-c053d375587b name=/runtime.v1.RuntimeService/StartContainer
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.452710168Z" level=info msg="Started container" PID=9202 containerID=174184f159113ed72fdfeffaa1836e6ae2ba344ccbfa80f82b77aa40c598ce61 description=default/hello-world-app-86c47465fc-j7hjs/hello-world-app id=22f5f857-d4f0-49ef-83d2-c053d375587b name=/runtime.v1.RuntimeService/StartContainer sandboxID=794425e63ed03b2adefd1957ebb2c482f0f81978b5dc0fd339dffcd01ca4fff5
	Apr 20 00:56:08 addons-747503 conmon[9190]: conmon 174184f159113ed72fdf <ninfo>: container 9202 exited with status 1
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.546278622Z" level=info msg="Removing container: 8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb" id=6f46c1d5-da2e-4116-b4e3-a24c70133767 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:56:08 addons-747503 crio[920]: time="2024-04-20 00:56:08.566114141Z" level=info msg="Removed container 8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb: default/hello-world-app-86c47465fc-j7hjs/hello-world-app" id=6f46c1d5-da2e-4116-b4e3-a24c70133767 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:56:18 addons-747503 crio[920]: time="2024-04-20 00:56:18.097205883Z" level=info msg="Stopping container: d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d (timeout: 30s)" id=63da96d5-5a08-4046-8e60-7eedec65a5ad name=/runtime.v1.RuntimeService/StopContainer
	Apr 20 00:56:19 addons-747503 crio[920]: time="2024-04-20 00:56:19.259868054Z" level=info msg="Stopped container d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d: kube-system/metrics-server-c59844bb4-jmtz4/metrics-server" id=63da96d5-5a08-4046-8e60-7eedec65a5ad name=/runtime.v1.RuntimeService/StopContainer
	Apr 20 00:56:19 addons-747503 crio[920]: time="2024-04-20 00:56:19.260831439Z" level=info msg="Stopping pod sandbox: 48679652e7ffe1846e514ca96ae2e313ce45268273357b59d714b84ac6423511" id=5743d349-0909-473a-af5c-9efb8fb0b247 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:56:19 addons-747503 crio[920]: time="2024-04-20 00:56:19.261053955Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-jmtz4 Namespace:kube-system ID:48679652e7ffe1846e514ca96ae2e313ce45268273357b59d714b84ac6423511 UID:582654f0-7046-465f-b015-d889d5397c3c NetNS:/var/run/netns/35e723bc-3348-47b1-aa12-d1ace857181a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 20 00:56:19 addons-747503 crio[920]: time="2024-04-20 00:56:19.261189861Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-jmtz4 from CNI network \"kindnet\" (type=ptp)"
	Apr 20 00:56:19 addons-747503 crio[920]: time="2024-04-20 00:56:19.295700961Z" level=info msg="Stopped pod sandbox: 48679652e7ffe1846e514ca96ae2e313ce45268273357b59d714b84ac6423511" id=5743d349-0909-473a-af5c-9efb8fb0b247 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	174184f159113       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                        11 seconds ago      Exited              hello-world-app           5                   794425e63ed03       hello-world-app-86c47465fc-j7hjs
	3059b1b73e48e       docker.io/library/nginx@sha256:7bd88800d8c18d4f73feeee25e04fcdbeecfc5e0a2b7254a90f4816bb67beadd                         5 minutes ago       Running             nginx                     0                   3d907a9a1b360       nginx
	065eeb203edc3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            7 minutes ago       Running             gcp-auth                  0                   12318430bbe8d       gcp-auth-5db96cd9b4-dg9c5
	c7bd8cacd1c82       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   b1192b4bbd9c1       yakd-dashboard-5ddbf7d777-q5cff
	d44171fb37303       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   48679652e7ffe       metrics-server-c59844bb4-jmtz4
	22c56a3e8a0fe       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   d7e961b6341a3       storage-provisioner
	dfc51e1c1bccd       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   1d5e91a66a006       coredns-7db6d8ff4d-pj8wd
	b21e49c0bda54       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                        8 minutes ago       Running             kindnet-cni               0                   48b6b802564de       kindnet-x7szp
	8504f24d60ff9       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f                                                        9 minutes ago       Running             kube-proxy                0                   a5fb2119d00b2       kube-proxy-cmk9r
	efdbc1a5337c8       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a                                                        9 minutes ago       Running             kube-scheduler            0                   db745aaf12fb3       kube-scheduler-addons-747503
	d7b31a1429803       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb                                                        9 minutes ago       Running             kube-apiserver            0                   e3358216037d1       kube-apiserver-addons-747503
	120c278a1bb92       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1                                                        9 minutes ago       Running             kube-controller-manager   0                   32129d92cb9e3       kube-controller-manager-addons-747503
	dc5579e3b8be4       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        9 minutes ago       Running             etcd                      0                   0793765290d5b       etcd-addons-747503
	
	
	==> coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] <==
	[INFO] 10.244.0.20:39271 - 49899 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000182116s
	[INFO] 10.244.0.20:39271 - 36853 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00015339s
	[INFO] 10.244.0.20:39271 - 54108 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071358s
	[INFO] 10.244.0.20:39271 - 23540 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064244s
	[INFO] 10.244.0.20:39271 - 36450 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001425418s
	[INFO] 10.244.0.20:39271 - 1160 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001283564s
	[INFO] 10.244.0.20:39271 - 55387 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069496s
	[INFO] 10.244.0.20:50025 - 38066 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000115434s
	[INFO] 10.244.0.20:58091 - 47793 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042502s
	[INFO] 10.244.0.20:50025 - 53522 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067075s
	[INFO] 10.244.0.20:50025 - 60527 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046842s
	[INFO] 10.244.0.20:50025 - 38545 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050107s
	[INFO] 10.244.0.20:58091 - 3541 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036602s
	[INFO] 10.244.0.20:58091 - 40598 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040417s
	[INFO] 10.244.0.20:50025 - 10978 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030498s
	[INFO] 10.244.0.20:50025 - 47695 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048064s
	[INFO] 10.244.0.20:58091 - 53055 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050575s
	[INFO] 10.244.0.20:58091 - 54793 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037406s
	[INFO] 10.244.0.20:58091 - 6655 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000079702s
	[INFO] 10.244.0.20:50025 - 33288 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001386388s
	[INFO] 10.244.0.20:58091 - 16986 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004227329s
	[INFO] 10.244.0.20:50025 - 6931 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003492319s
	[INFO] 10.244.0.20:50025 - 7397 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061315s
	[INFO] 10.244.0.20:58091 - 1995 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001219836s
	[INFO] 10.244.0.20:58091 - 57192 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006737s
	
	
	==> describe nodes <==
	Name:               addons-747503
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-747503
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=addons-747503
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T00_47_03_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-747503
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:46:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-747503
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:56:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:53:42 +0000   Sat, 20 Apr 2024 00:46:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:53:42 +0000   Sat, 20 Apr 2024 00:46:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:53:42 +0000   Sat, 20 Apr 2024 00:46:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:53:42 +0000   Sat, 20 Apr 2024 00:47:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-747503
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb345c96e51549588e3445f8f88cea8c
	  System UUID:                338aa8bd-646a-4cfc-b77a-f650366b6c8a
	  Boot ID:                    cdaae8f5-66dd-4dda-afdc-9b84bbb262c1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-j7hjs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  gcp-auth                    gcp-auth-5db96cd9b4-dg9c5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m53s
	  kube-system                 coredns-7db6d8ff4d-pj8wd                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m3s
	  kube-system                 etcd-addons-747503                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m17s
	  kube-system                 kindnet-x7szp                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m3s
	  kube-system                 kube-apiserver-addons-747503             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-addons-747503    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-cmk9r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 kube-scheduler-addons-747503             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-q5cff          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     8m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node addons-747503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node addons-747503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x8 over 9m24s)  kubelet          Node addons-747503 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node addons-747503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node addons-747503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node addons-747503 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m4s                   node-controller  Node addons-747503 event: Registered Node addons-747503 in Controller
	  Normal  NodeReady                8m29s                  kubelet          Node addons-747503 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000807] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000be586629
	[  +0.001092] FS-Cache: N-key=[8] '15d8c90000000000'
	[  +0.002828] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=000000008f558ce4
	[  +0.001060] FS-Cache: O-key=[8] '15d8c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000906] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000f46698ff
	[  +0.001011] FS-Cache: N-key=[8] '15d8c90000000000'
	[  +3.061970] FS-Cache: Duplicate cookie detected
	[  +0.000754] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001064] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=00000000ea440894
	[  +0.001029] FS-Cache: O-key=[8] '14d8c90000000000'
	[  +0.000778] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001045] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000999f4db4
	[  +0.001563] FS-Cache: N-key=[8] '14d8c90000000000'
	[  +0.297624] FS-Cache: Duplicate cookie detected
	[  +0.000690] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000919] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=00000000e5d6a697
	[  +0.001014] FS-Cache: O-key=[8] '1ad8c90000000000'
	[  +0.000691] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001016] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000be586629
	[  +0.001047] FS-Cache: N-key=[8] '1ad8c90000000000'
	[Apr20 00:19] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] <==
	{"level":"info","ts":"2024-04-20T00:46:57.326346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T00:46:57.327877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-20T00:46:57.329336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T00:46:57.333604Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:46:57.388525Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:46:57.388574Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-04-20T00:47:16.924241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.94691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-cmk9r\" ","response":"range_response_count:1 size:4633"}
	{"level":"info","ts":"2024-04-20T00:47:16.924761Z","caller":"traceutil/trace.go:171","msg":"trace[1215544873] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-cmk9r; range_end:; response_count:1; response_revision:378; }","duration":"125.496234ms","start":"2024-04-20T00:47:16.799258Z","end":"2024-04-20T00:47:16.924754Z","steps":["trace[1215544873] 'agreement among raft nodes before linearized reading'  (duration: 124.890747ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.924393Z","caller":"traceutil/trace.go:171","msg":"trace[153633669] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"125.22924ms","start":"2024-04-20T00:47:16.799148Z","end":"2024-04-20T00:47:16.924378Z","steps":["trace[153633669] 'process raft request'  (duration: 124.905082ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.924546Z","caller":"traceutil/trace.go:171","msg":"trace[925466315] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"125.445036ms","start":"2024-04-20T00:47:16.799094Z","end":"2024-04-20T00:47:16.924539Z","steps":["trace[925466315] 'process raft request'  (duration: 124.874059ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.924681Z","caller":"traceutil/trace.go:171","msg":"trace[470147017] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"125.389169ms","start":"2024-04-20T00:47:16.799285Z","end":"2024-04-20T00:47:16.924674Z","steps":["trace[470147017] 'process raft request'  (duration: 124.793429ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.92472Z","caller":"traceutil/trace.go:171","msg":"trace[1120083311] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"125.391983ms","start":"2024-04-20T00:47:16.799322Z","end":"2024-04-20T00:47:16.924714Z","steps":["trace[1120083311] 'process raft request'  (duration: 124.790048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:18.67798Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.383847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-x7szp\" ","response":"range_response_count:1 size:4910"}
	{"level":"info","ts":"2024-04-20T00:47:18.755921Z","caller":"traceutil/trace.go:171","msg":"trace[1811675749] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:402; }","duration":"118.796807ms","start":"2024-04-20T00:47:18.637099Z","end":"2024-04-20T00:47:18.755896Z","steps":["trace[1811675749] 'read index received'  (duration: 118.790883ms)","trace[1811675749] 'applied index is now lower than readState.Index'  (duration: 4.701µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T00:47:18.789714Z","caller":"traceutil/trace.go:171","msg":"trace[1490804175] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-x7szp; range_end:; response_count:1; response_revision:389; }","duration":"308.029626ms","start":"2024-04-20T00:47:18.448578Z","end":"2024-04-20T00:47:18.756608Z","steps":["trace[1490804175] 'agreement among raft nodes before linearized reading'  (duration: 229.28301ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.14332Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.448539Z","time spent":"694.609452ms","remote":"127.0.0.1:48224","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":4934,"request content":"key:\"/registry/pods/kube-system/kindnet-x7szp\" "}
	{"level":"warn","ts":"2024-04-20T00:47:19.166679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"490.097119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-7db6d8ff4d-pj8wd.17c7d681fb332ee5\" ","response":"range_response_count:1 size:844"}
	{"level":"info","ts":"2024-04-20T00:47:19.183579Z","caller":"traceutil/trace.go:171","msg":"trace[1385382603] range","detail":"{range_begin:/registry/events/kube-system/coredns-7db6d8ff4d-pj8wd.17c7d681fb332ee5; range_end:; response_count:1; response_revision:389; }","duration":"497.334583ms","start":"2024-04-20T00:47:18.676557Z","end":"2024-04-20T00:47:19.173891Z","steps":["trace[1385382603] 'agreement among raft nodes before linearized reading'  (duration: 490.031645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.189018Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.676516Z","time spent":"512.471167ms","remote":"127.0.0.1:48100","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":868,"request content":"key:\"/registry/events/kube-system/coredns-7db6d8ff4d-pj8wd.17c7d681fb332ee5\" "}
	{"level":"warn","ts":"2024-04-20T00:47:19.186049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"429.718807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-04-20T00:47:19.186088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"508.974865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-20T00:47:19.191942Z","caller":"traceutil/trace.go:171","msg":"trace[2029096551] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:389; }","duration":"514.81594ms","start":"2024-04-20T00:47:18.677109Z","end":"2024-04-20T00:47:19.191925Z","steps":["trace[2029096551] 'agreement among raft nodes before linearized reading'  (duration: 508.967169ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.25392Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.677088Z","time spent":"576.805526ms","remote":"127.0.0.1:48534","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":29,"request content":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" "}
	{"level":"info","ts":"2024-04-20T00:47:19.224875Z","caller":"traceutil/trace.go:171","msg":"trace[1801637594] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:389; }","duration":"468.545511ms","start":"2024-04-20T00:47:18.756311Z","end":"2024-04-20T00:47:19.224857Z","steps":["trace[1801637594] 'agreement among raft nodes before linearized reading'  (duration: 429.691937ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.254181Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.756257Z","time spent":"497.915488ms","remote":"127.0.0.1:48514","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/default/cloud-spanner-emulator\" "}
	
	
	==> gcp-auth [065eeb203edc3606ff24136ef272bf67f73b81ea9764ef0b86090be0bcf9d3e6] <==
	2024/04/20 00:48:57 GCP Auth Webhook started!
	2024/04/20 00:49:47 Ready to marshal response ...
	2024/04/20 00:49:47 Ready to write response ...
	2024/04/20 00:49:47 Ready to marshal response ...
	2024/04/20 00:49:47 Ready to write response ...
	2024/04/20 00:50:05 Ready to marshal response ...
	2024/04/20 00:50:05 Ready to write response ...
	2024/04/20 00:50:05 Ready to marshal response ...
	2024/04/20 00:50:05 Ready to write response ...
	2024/04/20 00:50:12 Ready to marshal response ...
	2024/04/20 00:50:12 Ready to write response ...
	2024/04/20 00:50:14 Ready to marshal response ...
	2024/04/20 00:50:14 Ready to write response ...
	2024/04/20 00:50:58 Ready to marshal response ...
	2024/04/20 00:50:58 Ready to write response ...
	2024/04/20 00:53:19 Ready to marshal response ...
	2024/04/20 00:53:19 Ready to write response ...
	
	
	==> kernel <==
	 00:56:19 up  7:38,  0 users,  load average: 0.08, 0.76, 1.65
	Linux addons-747503 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] <==
	I0420 00:54:10.809887       1 main.go:227] handling current node
	I0420 00:54:20.814362       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:54:20.814409       1 main.go:227] handling current node
	I0420 00:54:30.822006       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:54:30.822036       1 main.go:227] handling current node
	I0420 00:54:40.828917       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:54:40.828949       1 main.go:227] handling current node
	I0420 00:54:50.840061       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:54:50.840089       1 main.go:227] handling current node
	I0420 00:55:00.843829       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:55:00.843859       1 main.go:227] handling current node
	I0420 00:55:10.847573       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:55:10.847604       1 main.go:227] handling current node
	I0420 00:55:20.851112       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:55:20.851142       1 main.go:227] handling current node
	I0420 00:55:30.863607       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:55:30.863636       1 main.go:227] handling current node
	I0420 00:55:40.867921       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:55:40.867947       1 main.go:227] handling current node
	I0420 00:55:50.873581       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:55:50.873794       1 main.go:227] handling current node
	I0420 00:56:00.877994       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:56:00.878023       1 main.go:227] handling current node
	I0420 00:56:10.887029       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:56:10.887060       1 main.go:227] handling current node
	
	
	==> kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0420 00:49:01.187795       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	E0420 00:49:01.193314       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	E0420 00:49:01.214414       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	I0420 00:49:01.426379       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	http2: server: error reading preface from client 192.168.49.1:38014: read tcp 192.168.49.2:8443->192.168.49.1:38014: read: connection reset by peer
	I0420 00:50:00.595117       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0420 00:50:28.670545       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.670597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:50:28.715804       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.715853       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:50:28.748957       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.749037       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:50:28.834347       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.836069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0420 00:50:28.986860       1 watch.go:250] http2: stream closed
	W0420 00:50:29.716466       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0420 00:50:29.834147       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0420 00:50:29.854747       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0420 00:50:30.599412       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0420 00:50:58.242507       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0420 00:50:58.551717       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.222.26"}
	I0420 00:53:19.962474       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.159.228"}
	I0420 00:53:53.100871       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0420 00:53:54.129115       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] <==
	I0420 00:54:18.378285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.304µs"
	W0420 00:54:30.739707       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:54:30.739744       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:54:40.382439       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:54:40.382492       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:54:45.588267       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:54:45.588308       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:54:46.401963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="46.415µs"
	W0420 00:54:51.868290       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:54:51.868436       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:54:59.372288       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="41.311µs"
	W0420 00:55:17.278542       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:55:17.278580       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:55:25.729702       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:55:25.729737       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:55:27.956731       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:55:27.956770       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:55:37.106964       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:55:37.107003       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:56:08.563235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="352.342µs"
	W0420 00:56:11.285055       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:56:11.285093       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:56:17.268145       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:56:17.268184       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:56:18.067665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.68µs"
	
	
	==> kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] <==
	I0420 00:47:21.144641       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:47:21.218151       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0420 00:47:21.752491       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0420 00:47:21.752619       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:47:21.778013       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0420 00:47:21.778127       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0420 00:47:21.778180       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:47:21.778432       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:47:21.778963       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:47:21.779916       1 config.go:192] "Starting service config controller"
	I0420 00:47:21.780016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:47:21.780072       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:47:21.780100       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:47:21.780654       1 config.go:319] "Starting node config controller"
	I0420 00:47:21.780711       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:47:21.880269       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:47:21.885308       1 shared_informer.go:320] Caches are synced for node config
	I0420 00:47:21.885552       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] <==
	W0420 00:46:59.940609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:46:59.940660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:46:59.940762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 00:46:59.940813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 00:46:59.940912       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:46:59.940952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:46:59.941040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:46:59.941188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:46:59.941145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 00:46:59.941293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 00:47:00.905621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:47:00.905761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:47:00.906996       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 00:47:00.907089       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:47:00.942823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:47:00.942862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:47:00.951864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:47:00.952007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:47:01.030920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:47:01.031101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:47:01.032673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:47:01.032706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0420 00:47:01.039542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:47:01.039661       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0420 00:47:03.025762       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 00:54:46 addons-747503 kubelet[1518]: E0420 00:54:46.382712    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-j7hjs_default(c7fb6036-110e-4661-aca3-2f00006c27de)\"" pod="default/hello-world-app-86c47465fc-j7hjs" podUID="c7fb6036-110e-4661-aca3-2f00006c27de"
	Apr 20 00:54:59 addons-747503 kubelet[1518]: I0420 00:54:59.357589    1518 scope.go:117] "RemoveContainer" containerID="8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb"
	Apr 20 00:54:59 addons-747503 kubelet[1518]: E0420 00:54:59.357857    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-j7hjs_default(c7fb6036-110e-4661-aca3-2f00006c27de)\"" pod="default/hello-world-app-86c47465fc-j7hjs" podUID="c7fb6036-110e-4661-aca3-2f00006c27de"
	Apr 20 00:55:12 addons-747503 kubelet[1518]: I0420 00:55:12.358688    1518 scope.go:117] "RemoveContainer" containerID="8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb"
	Apr 20 00:55:12 addons-747503 kubelet[1518]: E0420 00:55:12.358966    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-j7hjs_default(c7fb6036-110e-4661-aca3-2f00006c27de)\"" pod="default/hello-world-app-86c47465fc-j7hjs" podUID="c7fb6036-110e-4661-aca3-2f00006c27de"
	Apr 20 00:55:25 addons-747503 kubelet[1518]: I0420 00:55:25.358027    1518 scope.go:117] "RemoveContainer" containerID="8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb"
	Apr 20 00:55:25 addons-747503 kubelet[1518]: E0420 00:55:25.358423    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-j7hjs_default(c7fb6036-110e-4661-aca3-2f00006c27de)\"" pod="default/hello-world-app-86c47465fc-j7hjs" podUID="c7fb6036-110e-4661-aca3-2f00006c27de"
	Apr 20 00:55:40 addons-747503 kubelet[1518]: I0420 00:55:40.357992    1518 scope.go:117] "RemoveContainer" containerID="8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb"
	Apr 20 00:55:40 addons-747503 kubelet[1518]: E0420 00:55:40.358259    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-j7hjs_default(c7fb6036-110e-4661-aca3-2f00006c27de)\"" pod="default/hello-world-app-86c47465fc-j7hjs" podUID="c7fb6036-110e-4661-aca3-2f00006c27de"
	Apr 20 00:55:54 addons-747503 kubelet[1518]: I0420 00:55:54.357894    1518 scope.go:117] "RemoveContainer" containerID="8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb"
	Apr 20 00:55:54 addons-747503 kubelet[1518]: E0420 00:55:54.358168    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-j7hjs_default(c7fb6036-110e-4661-aca3-2f00006c27de)\"" pod="default/hello-world-app-86c47465fc-j7hjs" podUID="c7fb6036-110e-4661-aca3-2f00006c27de"
	Apr 20 00:56:08 addons-747503 kubelet[1518]: I0420 00:56:08.358224    1518 scope.go:117] "RemoveContainer" containerID="8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb"
	Apr 20 00:56:08 addons-747503 kubelet[1518]: I0420 00:56:08.544434    1518 scope.go:117] "RemoveContainer" containerID="8818edb53c198739c3fe05adfe4f977161ad9806a2610cf79caf6eb4affa8bbb"
	Apr 20 00:56:08 addons-747503 kubelet[1518]: I0420 00:56:08.544741    1518 scope.go:117] "RemoveContainer" containerID="174184f159113ed72fdfeffaa1836e6ae2ba344ccbfa80f82b77aa40c598ce61"
	Apr 20 00:56:08 addons-747503 kubelet[1518]: E0420 00:56:08.544992    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-j7hjs_default(c7fb6036-110e-4661-aca3-2f00006c27de)\"" pod="default/hello-world-app-86c47465fc-j7hjs" podUID="c7fb6036-110e-4661-aca3-2f00006c27de"
	Apr 20 00:56:19 addons-747503 kubelet[1518]: I0420 00:56:19.468057    1518 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6c65\" (UniqueName: \"kubernetes.io/projected/582654f0-7046-465f-b015-d889d5397c3c-kube-api-access-k6c65\") pod \"582654f0-7046-465f-b015-d889d5397c3c\" (UID: \"582654f0-7046-465f-b015-d889d5397c3c\") "
	Apr 20 00:56:19 addons-747503 kubelet[1518]: I0420 00:56:19.468142    1518 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/582654f0-7046-465f-b015-d889d5397c3c-tmp-dir\") pod \"582654f0-7046-465f-b015-d889d5397c3c\" (UID: \"582654f0-7046-465f-b015-d889d5397c3c\") "
	Apr 20 00:56:19 addons-747503 kubelet[1518]: I0420 00:56:19.468489    1518 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582654f0-7046-465f-b015-d889d5397c3c-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "582654f0-7046-465f-b015-d889d5397c3c" (UID: "582654f0-7046-465f-b015-d889d5397c3c"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 20 00:56:19 addons-747503 kubelet[1518]: I0420 00:56:19.473315    1518 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582654f0-7046-465f-b015-d889d5397c3c-kube-api-access-k6c65" (OuterVolumeSpecName: "kube-api-access-k6c65") pod "582654f0-7046-465f-b015-d889d5397c3c" (UID: "582654f0-7046-465f-b015-d889d5397c3c"). InnerVolumeSpecName "kube-api-access-k6c65". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 20 00:56:19 addons-747503 kubelet[1518]: I0420 00:56:19.566108    1518 scope.go:117] "RemoveContainer" containerID="d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d"
	Apr 20 00:56:19 addons-747503 kubelet[1518]: I0420 00:56:19.568488    1518 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k6c65\" (UniqueName: \"kubernetes.io/projected/582654f0-7046-465f-b015-d889d5397c3c-kube-api-access-k6c65\") on node \"addons-747503\" DevicePath \"\""
	Apr 20 00:56:19 addons-747503 kubelet[1518]: I0420 00:56:19.568510    1518 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/582654f0-7046-465f-b015-d889d5397c3c-tmp-dir\") on node \"addons-747503\" DevicePath \"\""
	Apr 20 00:56:19 addons-747503 kubelet[1518]: I0420 00:56:19.604843    1518 scope.go:117] "RemoveContainer" containerID="d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d"
	Apr 20 00:56:19 addons-747503 kubelet[1518]: E0420 00:56:19.606417    1518 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d\": container with ID starting with d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d not found: ID does not exist" containerID="d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d"
	Apr 20 00:56:19 addons-747503 kubelet[1518]: I0420 00:56:19.606461    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d"} err="failed to get container status \"d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d\": rpc error: code = NotFound desc = could not find container \"d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d\": container with ID starting with d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d not found: ID does not exist"
	
	
	==> storage-provisioner [22c56a3e8a0fed567d434d23c22e5fb9e361b66b1c454f968e6ca7a6a7da876d] <==
	I0420 00:47:51.832097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 00:47:51.868002       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 00:47:51.868049       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 00:47:51.883550       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 00:47:51.884602       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69e938c5-ddcb-47d2-89e5-2e78c1a90077", APIVersion:"v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-747503_fa266937-e3d1-47aa-bd72-27b9ca80792a became leader
	I0420 00:47:51.887962       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-747503_fa266937-e3d1-47aa-bd72-27b9ca80792a!
	I0420 00:47:51.988837       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-747503_fa266937-e3d1-47aa-bd72-27b9ca80792a!
	

-- /stdout --
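The kubelet entries in the log dump above show the crash-looping hello-world-app pod progressing from "back-off 1m20s" to "back-off 2m40s". That progression is consistent with kubelet's standard container restart backoff, which (per the upstream kubelet defaults, assumed here rather than read from this cluster) starts at 10s and doubles on each restart up to a 5m cap. A minimal Go sketch of that schedule:

	// backoff_sketch.go: a minimal sketch (not minikube or kubelet code) of
	// the CrashLoopBackOff schedule observed above. Base delay and cap are
	// the assumed upstream kubelet defaults (10s initial, 5m maximum).
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const base = 10 * time.Second   // assumed initial restart delay
		const maxDelay = 5 * time.Minute // assumed backoff ceiling
		d := base
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, d)
			d *= 2
			if d > maxDelay {
				d = maxDelay
			}
		}
	}

Restarts 4 and 5 land on 1m20s and 2m40s, matching the kubelet messages above; from restart 6 onward the delay stays pinned at 5m0s.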
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-747503 -n addons-747503
helpers_test.go:261: (dbg) Run:  kubectl --context addons-747503 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (342.20s)
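The kube-apiserver log above records v1beta1.metrics.k8s.io repeatedly failing its availability check with "connection refused", which is what this test ultimately trips over. Below is a minimal diagnostic sketch, assuming kubectl is on PATH and the addons-747503 kubeconfig context still exists, that probes the same aggregated API a post-mortem would look at:

	// metrics_probe.go: a minimal sketch (not part of the test suite) that
	// checks the aggregated metrics API. kubectl on PATH and the
	// addons-747503 context are assumptions, not givens.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// APIService registration and availability as the apiserver sees it.
		out, err := exec.Command("kubectl", "--context", "addons-747503",
			"get", "apiservice", "v1beta1.metrics.k8s.io").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apiservice lookup failed:", err)
		}
		// Raw request to the aggregated endpoint; "connection refused" here
		// would mirror the available_controller errors in the log above.
		out, err = exec.Command("kubectl", "--context", "addons-747503",
			"get", "--raw", "/apis/metrics.k8s.io/v1beta1/nodes").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("raw metrics request failed:", err)
		}
	}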

TestAddons/parallel/Headlamp (3.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-747503 --alsologtostderr -v=1
addons_test.go:824: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-747503 --alsologtostderr -v=1: exit status 11 (574.20017ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0420 00:50:35.610745 1653024 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:50:35.611699 1653024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:50:35.611716 1653024 out.go:304] Setting ErrFile to fd 2...
	I0420 00:50:35.611723 1653024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:50:35.612047 1653024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 00:50:35.612385 1653024 mustload.go:65] Loading cluster: addons-747503
	I0420 00:50:35.612857 1653024 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:50:35.612898 1653024 addons.go:597] checking whether the cluster is paused
	I0420 00:50:35.613062 1653024 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:50:35.613095 1653024 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:50:35.613637 1653024 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:50:35.642095 1653024 ssh_runner.go:195] Run: systemctl --version
	I0420 00:50:35.642159 1653024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:50:35.659516 1653024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:50:35.761849 1653024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:50:35.762005 1653024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:50:35.808182 1653024 cri.go:89] found id: "97e581274b9eaf6bdffbfc2dee9a0dbfa70878a170e9b1d5127d8e45553a3fa5"
	I0420 00:50:35.808203 1653024 cri.go:89] found id: "d44171fb373039e48daa7df3f71859178097a97bdc07e7327cd3f1aa3b4e1a7d"
	I0420 00:50:35.808208 1653024 cri.go:89] found id: "22c56a3e8a0fed567d434d23c22e5fb9e361b66b1c454f968e6ca7a6a7da876d"
	I0420 00:50:35.808211 1653024 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:50:35.808214 1653024 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:50:35.808218 1653024 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:50:35.808225 1653024 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:50:35.808228 1653024 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:50:35.808231 1653024 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:50:35.808237 1653024 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:50:35.808240 1653024 cri.go:89] found id: ""
	I0420 00:50:35.808294 1653024 ssh_runner.go:195] Run: sudo runc list -f json
	I0420 00:50:35.845734 1653024 out.go:177] 
	W0420 00:50:35.848374 1653024 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-20T00:50:35Z" level=error msg="stat /run/runc/e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-20T00:50:35Z" level=error msg="stat /run/runc/e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20: no such file or directory"
	
	W0420 00:50:35.848478 1653024 out.go:239] * 
	* 
	W0420 00:50:36.115426 1653024 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0420 00:50:36.118351 1653024 out.go:177] 

** /stderr **
addons_test.go:826: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-747503 --alsologtostderr -v=1": exit status 11
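The stderr trace shows how the enable path decides whether the cluster is paused: it lists kube-system containers with crictl, then asks runc for its own view, and the "stat /run/runc/...: no such file or directory" error means a container disappeared between the two steps. A minimal Go sketch of that two-step check follows, assuming the addons-747503 profile exists and minikube is on PATH; both inner commands are taken verbatim from the trace above:

	// paused_check_sketch.go: a minimal sketch (not minikube's implementation)
	// of the paused check that failed above. A container exiting between the
	// crictl listing and the runc listing can make step 2 fail exactly as in
	// the stderr trace. Profile name and PATH availability are assumptions.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		cmd := exec.Command("minikube",
			append([]string{"-p", "addons-747503", "ssh", "--"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ %v\n%s", args, out)
		if err != nil {
			fmt.Println("error:", err)
		}
	}

	func main() {
		// Step 1: the CRI view of kube-system containers (as in the trace).
		run("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		// Step 2: the runtime's view; stale state here is what surfaces as
		// MK_ADDON_ENABLE_PAUSED in the trace above.
		run("sudo", "runc", "list", "-f", "json")
	}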
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-747503
helpers_test.go:235: (dbg) docker inspect addons-747503:

-- stdout --
	[
	    {
	        "Id": "038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56",
	        "Created": "2024-04-20T00:46:38.106832296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1644719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-20T00:46:38.423221308Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3b2d88ca3ca9b0dbaf60124ea2550b937bd64c7063d7cb640718ddb37cba13b1",
	        "ResolvConfPath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/hostname",
	        "HostsPath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/hosts",
	        "LogPath": "/var/lib/docker/containers/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56/038fb1234c5ed1428cb2e6caf6d407f0102ef23b18f7c51df21f0baf94000f56-json.log",
	        "Name": "/addons-747503",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-747503:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-747503",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542-init/diff:/var/lib/docker/overlay2/e0471a8635b1d2c4e15ee92afa46c7d34f76188a5b6aa3cb3689b7cec908b9a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11f58f8159fe4bcc9c388790d75da6c438cdd6b1e64ec9931ba42d5522190542/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-747503",
	                "Source": "/var/lib/docker/volumes/addons-747503/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-747503",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-747503",
	                "name.minikube.sigs.k8s.io": "addons-747503",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2bf060fd849aa8a792c66482994fdba957bcf5fad9bd2decda24bd7d8500a7b5",
	            "SandboxKey": "/var/run/docker/netns/2bf060fd849a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34675"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34674"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34671"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34673"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34672"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-747503": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "64e1715d5e750e9daed359ac38e3073a5c93c82f8a5daf2e135f2d0b5be8da62",
	                    "EndpointID": "31ed3dc6d507db832465fc3d5d178d5ab6552b0ea16ea63ec1d876b06129484e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-747503",
	                        "038fb1234c5e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
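The inspect output above is also where the harness gets its SSH endpoint: the cli_runner line in the Headlamp stderr trace reads the 22/tcp HostPort (34675 for this run) with a Go template. A minimal sketch, assuming docker is on PATH and the container still exists, that reproduces that lookup:

	// port_lookup_sketch.go: a minimal sketch reproducing the port lookup
	// seen in the stderr trace (docker container inspect -f ...). docker on
	// PATH and a still-running addons-747503 container are assumptions.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-747503").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("ssh host port: %s", out) // 34675 in the output above
	}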
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-747503 -n addons-747503
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-747503 logs -n 25: (1.449736483s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-784633   | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC |                     |
	|         | -p download-only-784633                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-784633                                                                     | download-only-784633   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| start   | -o=json --download-only                                                                     | download-only-161385   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | -p download-only-161385                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-161385                                                                     | download-only-161385   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-784633                                                                     | download-only-784633   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-161385                                                                     | download-only-161385   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| start   | --download-only -p                                                                          | download-docker-407942 | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | download-docker-407942                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-407942                                                                   | download-docker-407942 | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-562090   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | binary-mirror-562090                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39787                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-562090                                                                     | binary-mirror-562090   | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| addons  | enable dashboard -p                                                                         | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-747503 --wait=true                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-747503 ip                                                                            | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:49 UTC | 20 Apr 24 00:49 UTC |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:49 UTC | 20 Apr 24 00:49 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | -p addons-747503                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-747503 ssh cat                                                                       | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | /opt/local-path-provisioner/pvc-b29b3cd7-c850-4a4e-b0ba-8a8cc403a41d_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-747503 addons disable                                                                | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-747503 addons                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-747503 addons                                                                        | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC | 20 Apr 24 00:50 UTC |
	|         | addons-747503                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-747503          | jenkins | v1.33.0 | 20 Apr 24 00:50 UTC |                     |
	|         | -p addons-747503                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
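Note: the start -p addons-747503 invocation above is wrapped across several table rows. Reassembled into a single command (a sketch: the flags are taken verbatim from the rows above, and the binary path is the MINIKUBE_BIN value shown in the log below), it reads:

	out/minikube-linux-arm64 start -p addons-747503 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns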
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:46:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
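Note: each entry below follows the glog convention documented above: a severity letter (I=info, W=warning, E=error, F=fatal), mmdd, wall-clock time with microseconds, the thread id, and the emitting file:line. For example, "I0420 00:46:14.607015 1644261 out.go:291]" is an info-level message logged on April 20 at 00:46:14.607015 by thread 1644261 from out.go line 291.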
	I0420 00:46:14.607015 1644261 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:46:14.607178 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:46:14.607208 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:46:14.607226 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:46:14.607498 1644261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 00:46:14.607984 1644261 out.go:298] Setting JSON to false
	I0420 00:46:14.608870 1644261 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26921,"bootTime":1713547053,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0420 00:46:14.608940 1644261 start.go:139] virtualization:  
	I0420 00:46:14.612689 1644261 out.go:177] * [addons-747503] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0420 00:46:14.614357 1644261 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 00:46:14.616082 1644261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:46:14.614429 1644261 notify.go:220] Checking for updates...
	I0420 00:46:14.619849 1644261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 00:46:14.621777 1644261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	I0420 00:46:14.623523 1644261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0420 00:46:14.625229 1644261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 00:46:14.627320 1644261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:46:14.645723 1644261 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0420 00:46:14.645835 1644261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:46:14.712118 1644261 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-20 00:46:14.700723825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:46:14.712238 1644261 docker.go:295] overlay module found
	I0420 00:46:14.714333 1644261 out.go:177] * Using the docker driver based on user configuration
	I0420 00:46:14.715905 1644261 start.go:297] selected driver: docker
	I0420 00:46:14.715921 1644261 start.go:901] validating driver "docker" against <nil>
	I0420 00:46:14.715934 1644261 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 00:46:14.716574 1644261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:46:14.765511 1644261 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-20 00:46:14.755476473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:46:14.765687 1644261 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0420 00:46:14.765914 1644261 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:46:14.767783 1644261 out.go:177] * Using Docker driver with root privileges
	I0420 00:46:14.769372 1644261 cni.go:84] Creating CNI manager for ""
	I0420 00:46:14.769396 1644261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:46:14.769406 1644261 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0420 00:46:14.769486 1644261 start.go:340] cluster config:
	{Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:46:14.771617 1644261 out.go:177] * Starting "addons-747503" primary control-plane node in "addons-747503" cluster
	I0420 00:46:14.773185 1644261 cache.go:121] Beginning downloading kic base image for docker with crio
	I0420 00:46:14.774855 1644261 out.go:177] * Pulling base image v0.0.43 ...
	I0420 00:46:14.776595 1644261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:14.776634 1644261 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon
	I0420 00:46:14.776648 1644261 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0420 00:46:14.776672 1644261 cache.go:56] Caching tarball of preloaded images
	I0420 00:46:14.776753 1644261 preload.go:173] Found /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0420 00:46:14.776764 1644261 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 00:46:14.777129 1644261 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/config.json ...
	I0420 00:46:14.777263 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/config.json: {Name:mkc5932488b9adc511b83497f974c2edc34e9770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:14.789608 1644261 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 to local cache
	I0420 00:46:14.789711 1644261 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local cache directory
	I0420 00:46:14.789728 1644261 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local cache directory, skipping pull
	I0420 00:46:14.789733 1644261 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 exists in cache, skipping pull
	I0420 00:46:14.789741 1644261 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 as a tarball
	I0420 00:46:14.789746 1644261 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 from local cache
	I0420 00:46:31.319259 1644261 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 from cached tarball
	I0420 00:46:31.319302 1644261 cache.go:194] Successfully downloaded all kic artifacts
	I0420 00:46:31.319332 1644261 start.go:360] acquireMachinesLock for addons-747503: {Name:mk90f80baada2f8c104726bc92d1956d63d494dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 00:46:31.319827 1644261 start.go:364] duration metric: took 471.731µs to acquireMachinesLock for "addons-747503"
	I0420 00:46:31.319867 1644261 start.go:93] Provisioning new machine with config: &{Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:46:31.319953 1644261 start.go:125] createHost starting for "" (driver="docker")
	I0420 00:46:31.322194 1644261 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0420 00:46:31.322447 1644261 start.go:159] libmachine.API.Create for "addons-747503" (driver="docker")
	I0420 00:46:31.322484 1644261 client.go:168] LocalClient.Create starting
	I0420 00:46:31.322598 1644261 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem
	I0420 00:46:31.615216 1644261 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem
	I0420 00:46:31.818172 1644261 cli_runner.go:164] Run: docker network inspect addons-747503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0420 00:46:31.832341 1644261 cli_runner.go:211] docker network inspect addons-747503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0420 00:46:31.832434 1644261 network_create.go:281] running [docker network inspect addons-747503] to gather additional debugging logs...
	I0420 00:46:31.832456 1644261 cli_runner.go:164] Run: docker network inspect addons-747503
	W0420 00:46:31.845135 1644261 cli_runner.go:211] docker network inspect addons-747503 returned with exit code 1
	I0420 00:46:31.845171 1644261 network_create.go:284] error running [docker network inspect addons-747503]: docker network inspect addons-747503: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-747503 not found
	I0420 00:46:31.845184 1644261 network_create.go:286] output of [docker network inspect addons-747503]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-747503 not found
	
	** /stderr **
	I0420 00:46:31.845292 1644261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0420 00:46:31.858385 1644261 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024d75e0}
	I0420 00:46:31.858427 1644261 network_create.go:124] attempt to create docker network addons-747503 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0420 00:46:31.858487 1644261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-747503 addons-747503
	I0420 00:46:31.918669 1644261 network_create.go:108] docker network addons-747503 192.168.49.0/24 created
	I0420 00:46:31.918704 1644261 kic.go:121] calculated static IP "192.168.49.2" for the "addons-747503" container
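	Note: the subnet and gateway chosen above can be read back from the Docker daemon with a Go-template inspection (a sketch; the network is named after the profile):
	  docker network inspect addons-747503 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # expected, per the log above: 192.168.49.0/24 192.168.49.1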
	I0420 00:46:31.918779 1644261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0420 00:46:31.932121 1644261 cli_runner.go:164] Run: docker volume create addons-747503 --label name.minikube.sigs.k8s.io=addons-747503 --label created_by.minikube.sigs.k8s.io=true
	I0420 00:46:31.946137 1644261 oci.go:103] Successfully created a docker volume addons-747503
	I0420 00:46:31.946230 1644261 cli_runner.go:164] Run: docker run --rm --name addons-747503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747503 --entrypoint /usr/bin/test -v addons-747503:/var gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -d /var/lib
	I0420 00:46:33.904376 1644261 cli_runner.go:217] Completed: docker run --rm --name addons-747503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747503 --entrypoint /usr/bin/test -v addons-747503:/var gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -d /var/lib: (1.958105111s)
	I0420 00:46:33.904409 1644261 oci.go:107] Successfully prepared a docker volume addons-747503
	I0420 00:46:33.904447 1644261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:33.904466 1644261 kic.go:194] Starting extracting preloaded images to volume ...
	I0420 00:46:33.904548 1644261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747503:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -I lz4 -xf /preloaded.tar -C /extractDir
	I0420 00:46:38.033459 1644261 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-747503:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 -I lz4 -xf /preloaded.tar -C /extractDir: (4.128855513s)
	I0420 00:46:38.033498 1644261 kic.go:203] duration metric: took 4.129027815s to extract preloaded images to volume ...
	W0420 00:46:38.033666 1644261 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0420 00:46:38.033783 1644261 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0420 00:46:38.092961 1644261 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-747503 --name addons-747503 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-747503 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-747503 --network addons-747503 --ip 192.168.49.2 --volume addons-747503:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737
	I0420 00:46:38.431321 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Running}}
	I0420 00:46:38.449111 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:46:38.473100 1644261 cli_runner.go:164] Run: docker exec addons-747503 stat /var/lib/dpkg/alternatives/iptables
	I0420 00:46:38.539136 1644261 oci.go:144] the created container "addons-747503" has a running status.
	I0420 00:46:38.539177 1644261 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa...
	I0420 00:46:38.988697 1644261 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0420 00:46:39.013673 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:46:39.036196 1644261 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0420 00:46:39.036217 1644261 kic_runner.go:114] Args: [docker exec --privileged addons-747503 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0420 00:46:39.118596 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:46:39.142860 1644261 machine.go:94] provisionDockerMachine start ...
	I0420 00:46:39.142976 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:39.167812 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:39.168086 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:39.168096 1644261 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 00:46:39.349580 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747503
	
	I0420 00:46:39.349601 1644261 ubuntu.go:169] provisioning hostname "addons-747503"
	I0420 00:46:39.349678 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:39.377796 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:39.378035 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:39.378046 1644261 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-747503 && echo "addons-747503" | sudo tee /etc/hostname
	I0420 00:46:39.558224 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-747503
	
	I0420 00:46:39.558419 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:39.575363 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:39.575600 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:39.575617 1644261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-747503' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-747503/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-747503' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 00:46:39.717750 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 00:46:39.717780 1644261 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18703-1638187/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-1638187/.minikube}
	I0420 00:46:39.717798 1644261 ubuntu.go:177] setting up certificates
	I0420 00:46:39.717807 1644261 provision.go:84] configureAuth start
	I0420 00:46:39.717871 1644261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747503
	I0420 00:46:39.734066 1644261 provision.go:143] copyHostCerts
	I0420 00:46:39.734147 1644261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem (1082 bytes)
	I0420 00:46:39.734277 1644261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem (1123 bytes)
	I0420 00:46:39.734339 1644261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem (1675 bytes)
	I0420 00:46:39.734390 1644261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem org=jenkins.addons-747503 san=[127.0.0.1 192.168.49.2 addons-747503 localhost minikube]
	I0420 00:46:40.231219 1644261 provision.go:177] copyRemoteCerts
	I0420 00:46:40.231290 1644261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 00:46:40.231331 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.247276 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.346662 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 00:46:40.371651 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0420 00:46:40.396149 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0420 00:46:40.421133 1644261 provision.go:87] duration metric: took 703.312596ms to configureAuth
	I0420 00:46:40.421162 1644261 ubuntu.go:193] setting minikube options for container-runtime
	I0420 00:46:40.421357 1644261 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:46:40.421463 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.436686 1644261 main.go:141] libmachine: Using SSH client type: native
	I0420 00:46:40.436931 1644261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34675 <nil> <nil>}
	I0420 00:46:40.436947 1644261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 00:46:40.681193 1644261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 00:46:40.681219 1644261 machine.go:97] duration metric: took 1.538331373s to provisionDockerMachine
	I0420 00:46:40.681230 1644261 client.go:171] duration metric: took 9.358739082s to LocalClient.Create
	I0420 00:46:40.681274 1644261 start.go:167] duration metric: took 9.358813131s to libmachine.API.Create "addons-747503"
	I0420 00:46:40.681289 1644261 start.go:293] postStartSetup for "addons-747503" (driver="docker")
	I0420 00:46:40.681301 1644261 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 00:46:40.681386 1644261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 00:46:40.681463 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.698546 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.802764 1644261 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 00:46:40.805936 1644261 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0420 00:46:40.805975 1644261 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0420 00:46:40.806008 1644261 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0420 00:46:40.806022 1644261 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0420 00:46:40.806034 1644261 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/addons for local assets ...
	I0420 00:46:40.806115 1644261 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/files for local assets ...
	I0420 00:46:40.806144 1644261 start.go:296] duration metric: took 124.848597ms for postStartSetup
	I0420 00:46:40.806464 1644261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747503
	I0420 00:46:40.821587 1644261 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/config.json ...
	I0420 00:46:40.821882 1644261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 00:46:40.821936 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.835949 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.934325 1644261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0420 00:46:40.938838 1644261 start.go:128] duration metric: took 9.618867781s to createHost
	I0420 00:46:40.938860 1644261 start.go:83] releasing machines lock for "addons-747503", held for 9.61901377s
	I0420 00:46:40.938948 1644261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-747503
	I0420 00:46:40.954767 1644261 ssh_runner.go:195] Run: cat /version.json
	I0420 00:46:40.954809 1644261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 00:46:40.954838 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.954856 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:46:40.973750 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:40.987873 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:46:41.073383 1644261 ssh_runner.go:195] Run: systemctl --version
	I0420 00:46:41.192077 1644261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 00:46:41.344667 1644261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0420 00:46:41.349255 1644261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:46:41.370360 1644261 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0420 00:46:41.370464 1644261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 00:46:41.403068 1644261 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0420 00:46:41.403146 1644261 start.go:494] detecting cgroup driver to use...
	I0420 00:46:41.403194 1644261 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0420 00:46:41.403271 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 00:46:41.419319 1644261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 00:46:41.431512 1644261 docker.go:217] disabling cri-docker service (if available) ...
	I0420 00:46:41.431608 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 00:46:41.446179 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 00:46:41.465996 1644261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 00:46:41.554380 1644261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 00:46:41.655130 1644261 docker.go:233] disabling docker service ...
	I0420 00:46:41.655197 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 00:46:41.675820 1644261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 00:46:41.688324 1644261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 00:46:41.772551 1644261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 00:46:41.869236 1644261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 00:46:41.880923 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 00:46:41.897306 1644261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 00:46:41.897393 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.908466 1644261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 00:46:41.908556 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.919831 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.930232 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.940033 1644261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 00:46:41.949454 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.959319 1644261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.974839 1644261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 00:46:41.984469 1644261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 00:46:41.993979 1644261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 00:46:42.008022 1644261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:46:42.111879 1644261 ssh_runner.go:195] Run: sudo systemctl restart crio
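	Note: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values before crio is restarted (a reconstruction from the logged commands, not a dump of the file; the [crio.image]/[crio.runtime] section placement follows the stock CRI-O layout and is assumed here):
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]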
	I0420 00:46:42.238392 1644261 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 00:46:42.238485 1644261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 00:46:42.242714 1644261 start.go:562] Will wait 60s for crictl version
	I0420 00:46:42.242782 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:46:42.246739 1644261 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 00:46:42.289378 1644261 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0420 00:46:42.289488 1644261 ssh_runner.go:195] Run: crio --version
	I0420 00:46:42.333568 1644261 ssh_runner.go:195] Run: crio --version
	I0420 00:46:42.377897 1644261 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0420 00:46:42.379595 1644261 cli_runner.go:164] Run: docker network inspect addons-747503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0420 00:46:42.392523 1644261 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0420 00:46:42.396287 1644261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 00:46:42.406719 1644261 kubeadm.go:877] updating cluster {Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 00:46:42.406844 1644261 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:42.406909 1644261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:46:42.492542 1644261 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:46:42.492568 1644261 crio.go:433] Images already preloaded, skipping extraction
	I0420 00:46:42.492648 1644261 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 00:46:42.532591 1644261 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 00:46:42.532618 1644261 cache_images.go:84] Images are preloaded, skipping loading
	I0420 00:46:42.532628 1644261 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 crio true true} ...
	I0420 00:46:42.532741 1644261 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-747503 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 00:46:42.532824 1644261 ssh_runner.go:195] Run: crio config
	I0420 00:46:42.580609 1644261 cni.go:84] Creating CNI manager for ""
	I0420 00:46:42.580639 1644261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:46:42.580660 1644261 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 00:46:42.580718 1644261 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-747503 NodeName:addons-747503 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 00:46:42.580886 1644261 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-747503"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
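The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) before being copied into place. A sketch of an offline sanity check, assuming the v1.30.0 kubeadm binary in minikube's binaries directory supports the validate subcommand introduced in v1.26:

	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml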
	I0420 00:46:42.580966 1644261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 00:46:42.590117 1644261 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 00:46:42.590190 1644261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0420 00:46:42.599044 1644261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0420 00:46:42.617636 1644261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 00:46:42.635779 1644261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0420 00:46:42.653757 1644261 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0420 00:46:42.657403 1644261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
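The one-liner above deduplicates rather than blindly appends: it filters any stale control-plane.minikube.internal entry out of /etc/hosts, re-adds the line with the current IP, and copies the result back via sudo. Step by step, the equivalent is roughly:

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	printf '192.168.49.2\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts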
	I0420 00:46:42.668479 1644261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:46:42.748825 1644261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:46:42.762791 1644261 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503 for IP: 192.168.49.2
	I0420 00:46:42.762861 1644261 certs.go:194] generating shared ca certs ...
	I0420 00:46:42.762893 1644261 certs.go:226] acquiring lock for ca certs: {Name:mkf02d2bd3e0f29e12b7cec7c5b9a48566830288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:42.763075 1644261 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key
	I0420 00:46:42.952911 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt ...
	I0420 00:46:42.952946 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt: {Name:mk49370c70b4ffc1cbcd1227f487de3de2af3ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:42.953182 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key ...
	I0420 00:46:42.953200 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key: {Name:mk2877a201a5ba28e426f127f32ae06fa0033f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:42.953299 1644261 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key
	I0420 00:46:43.525747 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt ...
	I0420 00:46:43.525778 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt: {Name:mk695cd51a6cd9c3c06377fb3cd1872da426efc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.527292 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key ...
	I0420 00:46:43.527309 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key: {Name:mkef065e7c04a8c6100720cceafeab1ff9cb96b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.527942 1644261 certs.go:256] generating profile certs ...
	I0420 00:46:43.528022 1644261 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.key
	I0420 00:46:43.528041 1644261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt with IP's: []
	I0420 00:46:43.960821 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt ...
	I0420 00:46:43.960852 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: {Name:mk84a033ba366df9ffa0dfef7e831bb3e5c0f737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.961043 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.key ...
	I0420 00:46:43.961056 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.key: {Name:mk83bfd7e187e91bdb04631dbc1011de4d92fc28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:43.961606 1644261 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09
	I0420 00:46:43.961631 1644261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0420 00:46:44.377939 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09 ...
	I0420 00:46:44.377977 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09: {Name:mk0a88b731f275f786bbac6d601f7f9fda080c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:44.378572 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09 ...
	I0420 00:46:44.378591 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09: {Name:mkd4e59169d95ea0e222dd2e9bcaa9e7684c6506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:44.379246 1644261 certs.go:381] copying /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt.e2a49c09 -> /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt
	I0420 00:46:44.379343 1644261 certs.go:385] copying /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key.e2a49c09 -> /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key
	I0420 00:46:44.379402 1644261 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key
	I0420 00:46:44.379425 1644261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt with IP's: []
	I0420 00:46:45.155458 1644261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt ...
	I0420 00:46:45.155496 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt: {Name:mk297ed885f196ef52980a6bcd4c4dd306202aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:45.155722 1644261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key ...
	I0420 00:46:45.155739 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key: {Name:mk1a0c4c69f4e1c4e307aafc0f32c462980fe679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:45.155970 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem (1679 bytes)
	I0420 00:46:45.156033 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem (1082 bytes)
	I0420 00:46:45.156076 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem (1123 bytes)
	I0420 00:46:45.156120 1644261 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem (1675 bytes)
	I0420 00:46:45.156827 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 00:46:45.185776 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 00:46:45.215921 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 00:46:45.246659 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0420 00:46:45.276336 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0420 00:46:45.302931 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0420 00:46:45.330184 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 00:46:45.355925 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 00:46:45.380042 1644261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 00:46:45.404816 1644261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 00:46:45.422615 1644261 ssh_runner.go:195] Run: openssl version
	I0420 00:46:45.427939 1644261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 00:46:45.437580 1644261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:46:45.441275 1644261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:46 /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:46:45.441378 1644261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 00:46:45.448324 1644261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
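The hash/symlink pair above is how OpenSSL's trust-store lookup works: openssl x509 -hash prints the subject-name hash (b5213941 for minikubeCA here), and verification only finds the CA if /etc/ssl/certs/<hash>.0 points at it. To check the wiring inside the node:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expect b5213941
	ls -l /etc/ssl/certs/b5213941.0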
	I0420 00:46:45.457860 1644261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 00:46:45.461194 1644261 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 00:46:45.461309 1644261 kubeadm.go:391] StartCluster: {Name:addons-747503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-747503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:46:45.461403 1644261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 00:46:45.461467 1644261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 00:46:45.503476 1644261 cri.go:89] found id: ""
	I0420 00:46:45.503547 1644261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0420 00:46:45.512391 1644261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0420 00:46:45.521198 1644261 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0420 00:46:45.521290 1644261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0420 00:46:45.530277 1644261 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0420 00:46:45.530297 1644261 kubeadm.go:156] found existing configuration files:
	
	I0420 00:46:45.530357 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0420 00:46:45.539187 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0420 00:46:45.539295 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0420 00:46:45.547666 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0420 00:46:45.556291 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0420 00:46:45.556360 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0420 00:46:45.564678 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0420 00:46:45.573450 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0420 00:46:45.573517 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0420 00:46:45.582508 1644261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0420 00:46:45.591500 1644261 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0420 00:46:45.591577 1644261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0420 00:46:45.600866 1644261 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0420 00:46:45.645453 1644261 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0420 00:46:45.645777 1644261 kubeadm.go:309] [preflight] Running pre-flight checks
	I0420 00:46:45.683542 1644261 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0420 00:46:45.683657 1644261 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0420 00:46:45.683722 1644261 kubeadm.go:309] OS: Linux
	I0420 00:46:45.683789 1644261 kubeadm.go:309] CGROUPS_CPU: enabled
	I0420 00:46:45.683865 1644261 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0420 00:46:45.683931 1644261 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0420 00:46:45.684007 1644261 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0420 00:46:45.684075 1644261 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0420 00:46:45.684148 1644261 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0420 00:46:45.684214 1644261 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0420 00:46:45.684312 1644261 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0420 00:46:45.684387 1644261 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0420 00:46:45.759423 1644261 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0420 00:46:45.759626 1644261 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0420 00:46:45.759767 1644261 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0420 00:46:46.002291 1644261 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0420 00:46:46.007243 1644261 out.go:204]   - Generating certificates and keys ...
	I0420 00:46:46.007483 1644261 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0420 00:46:46.007612 1644261 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0420 00:46:46.884914 1644261 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0420 00:46:47.257057 1644261 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0420 00:46:47.525713 1644261 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0420 00:46:48.004926 1644261 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0420 00:46:48.760658 1644261 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0420 00:46:48.761028 1644261 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-747503 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0420 00:46:49.351744 1644261 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0420 00:46:49.352075 1644261 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-747503 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0420 00:46:50.201612 1644261 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0420 00:46:51.561008 1644261 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0420 00:46:51.893672 1644261 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0420 00:46:51.893960 1644261 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0420 00:46:52.391610 1644261 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0420 00:46:52.832785 1644261 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0420 00:46:53.450795 1644261 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0420 00:46:54.163371 1644261 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0420 00:46:54.525910 1644261 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0420 00:46:54.526499 1644261 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0420 00:46:54.530099 1644261 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0420 00:46:54.533698 1644261 out.go:204]   - Booting up control plane ...
	I0420 00:46:54.533810 1644261 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0420 00:46:54.533896 1644261 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0420 00:46:54.534330 1644261 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0420 00:46:54.544911 1644261 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0420 00:46:54.545769 1644261 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0420 00:46:54.545990 1644261 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0420 00:46:54.640341 1644261 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0420 00:46:54.640434 1644261 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0420 00:46:56.142567 1644261 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.502003018s
	I0420 00:46:56.142654 1644261 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0420 00:47:01.644242 1644261 kubeadm.go:309] [api-check] The API server is healthy after 5.501943699s
	I0420 00:47:01.664659 1644261 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0420 00:47:01.681300 1644261 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0420 00:47:01.708476 1644261 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0420 00:47:01.708705 1644261 kubeadm.go:309] [mark-control-plane] Marking the node addons-747503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0420 00:47:01.721036 1644261 kubeadm.go:309] [bootstrap-token] Using token: gydxtq.1vtpvmdo173k1bfx
	I0420 00:47:01.723573 1644261 out.go:204]   - Configuring RBAC rules ...
	I0420 00:47:01.723699 1644261 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0420 00:47:01.728901 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0420 00:47:01.737904 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0420 00:47:01.741657 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0420 00:47:01.745445 1644261 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0420 00:47:01.750064 1644261 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0420 00:47:02.051404 1644261 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0420 00:47:02.494567 1644261 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0420 00:47:03.051115 1644261 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0420 00:47:03.052415 1644261 kubeadm.go:309] 
	I0420 00:47:03.052490 1644261 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0420 00:47:03.052501 1644261 kubeadm.go:309] 
	I0420 00:47:03.052583 1644261 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0420 00:47:03.052596 1644261 kubeadm.go:309] 
	I0420 00:47:03.052621 1644261 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0420 00:47:03.052682 1644261 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0420 00:47:03.052735 1644261 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0420 00:47:03.052744 1644261 kubeadm.go:309] 
	I0420 00:47:03.052796 1644261 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0420 00:47:03.052805 1644261 kubeadm.go:309] 
	I0420 00:47:03.052851 1644261 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0420 00:47:03.052861 1644261 kubeadm.go:309] 
	I0420 00:47:03.052912 1644261 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0420 00:47:03.052987 1644261 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0420 00:47:03.053062 1644261 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0420 00:47:03.053074 1644261 kubeadm.go:309] 
	I0420 00:47:03.053155 1644261 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0420 00:47:03.053232 1644261 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0420 00:47:03.053241 1644261 kubeadm.go:309] 
	I0420 00:47:03.053322 1644261 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token gydxtq.1vtpvmdo173k1bfx \
	I0420 00:47:03.053425 1644261 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9c904917a7f9caa355a71a4c03ca34b03d28761d5d47f15de292975c6da7288d \
	I0420 00:47:03.053449 1644261 kubeadm.go:309] 	--control-plane 
	I0420 00:47:03.053475 1644261 kubeadm.go:309] 
	I0420 00:47:03.053587 1644261 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0420 00:47:03.053596 1644261 kubeadm.go:309] 
	I0420 00:47:03.053675 1644261 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token gydxtq.1vtpvmdo173k1bfx \
	I0420 00:47:03.053777 1644261 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9c904917a7f9caa355a71a4c03ca34b03d28761d5d47f15de292975c6da7288d 
	I0420 00:47:03.056801 1644261 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0420 00:47:03.056915 1644261 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
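The bootstrap token in the join commands above expires after the 24h TTL set in the InitConfiguration (ttl: 24h0m0s), so the printed commands go stale. A fresh worker join command can be minted on the control plane at any time:

	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm token create --print-join-command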
	I0420 00:47:03.056944 1644261 cni.go:84] Creating CNI manager for ""
	I0420 00:47:03.056957 1644261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:47:03.060887 1644261 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0420 00:47:03.063358 1644261 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0420 00:47:03.067117 1644261 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0420 00:47:03.067135 1644261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0420 00:47:03.086237 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
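With the docker driver and crio runtime, minikube recommends kindnet (cni.go:143 above) and applies its manifest with the bundled kubectl. One way to confirm the CNI came up, assuming the manifest keeps its usual kindnet daemonset name in kube-system:

	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get ds kindnet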
	I0420 00:47:03.390022 1644261 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0420 00:47:03.390173 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:03.390313 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-747503 minikube.k8s.io/updated_at=2024_04_20T00_47_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e minikube.k8s.io/name=addons-747503 minikube.k8s.io/primary=true
	I0420 00:47:03.585439 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:03.585498 1644261 ops.go:34] apiserver oom_adj: -16
	I0420 00:47:04.085626 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:04.586529 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:05.086557 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:05.585699 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:06.085654 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:06.585771 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:07.085576 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:07.586155 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:08.086096 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:08.585649 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:09.086504 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:09.585610 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:10.085668 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:10.586519 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:11.086404 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:11.586239 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:12.085669 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:12.586287 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:13.086534 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:13.586536 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:14.085702 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:14.586138 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:15.085794 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:15.586433 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:16.085650 1644261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0420 00:47:16.179804 1644261 kubeadm.go:1107] duration metric: took 12.78970109s to wait for elevateKubeSystemPrivileges
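The burst of "kubectl get sa default" calls above is elevateKubeSystemPrivileges polling for the default ServiceAccount in the default namespace; its appearance means the controller-manager's serviceaccount controller is live, which is the readiness signal this step waits on (12.79s in this run). The same probe by hand:

	kubectl --context addons-747503 -n default get serviceaccount default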
	W0420 00:47:16.179838 1644261 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0420 00:47:16.179846 1644261 kubeadm.go:393] duration metric: took 30.718541399s to StartCluster
	I0420 00:47:16.179861 1644261 settings.go:142] acquiring lock: {Name:mk38dc124731a3de0f512758a89f5557db305d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:47:16.180388 1644261 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 00:47:16.180815 1644261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/kubeconfig: {Name:mk33979dc7705003abaa608c8031c04a91a05c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:47:16.181428 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0420 00:47:16.181453 1644261 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 00:47:16.183715 1644261 out.go:177] * Verifying Kubernetes components...
	I0420 00:47:16.181718 1644261 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:16.181730 1644261 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0420 00:47:16.185704 1644261 addons.go:69] Setting yakd=true in profile "addons-747503"
	I0420 00:47:16.185734 1644261 addons.go:234] Setting addon yakd=true in "addons-747503"
	I0420 00:47:16.185768 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.186268 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.186447 1644261 addons.go:69] Setting ingress-dns=true in profile "addons-747503"
	I0420 00:47:16.186469 1644261 addons.go:234] Setting addon ingress-dns=true in "addons-747503"
	I0420 00:47:16.186517 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.186921 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.187248 1644261 addons.go:69] Setting inspektor-gadget=true in profile "addons-747503"
	I0420 00:47:16.187275 1644261 addons.go:234] Setting addon inspektor-gadget=true in "addons-747503"
	I0420 00:47:16.187315 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.187697 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.187894 1644261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 00:47:16.188126 1644261 addons.go:69] Setting cloud-spanner=true in profile "addons-747503"
	I0420 00:47:16.188155 1644261 addons.go:234] Setting addon cloud-spanner=true in "addons-747503"
	I0420 00:47:16.188175 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.188546 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.191365 1644261 addons.go:69] Setting metrics-server=true in profile "addons-747503"
	I0420 00:47:16.191404 1644261 addons.go:234] Setting addon metrics-server=true in "addons-747503"
	I0420 00:47:16.191442 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.191858 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.195564 1644261 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-747503"
	I0420 00:47:16.195639 1644261 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-747503"
	I0420 00:47:16.195677 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.196140 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.196393 1644261 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-747503"
	I0420 00:47:16.196425 1644261 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-747503"
	I0420 00:47:16.196458 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.196858 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.212712 1644261 addons.go:69] Setting registry=true in profile "addons-747503"
	I0420 00:47:16.212761 1644261 addons.go:234] Setting addon registry=true in "addons-747503"
	I0420 00:47:16.212800 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.213262 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.213595 1644261 addons.go:69] Setting default-storageclass=true in profile "addons-747503"
	I0420 00:47:16.213636 1644261 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-747503"
	I0420 00:47:16.213917 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.229059 1644261 addons.go:69] Setting storage-provisioner=true in profile "addons-747503"
	I0420 00:47:16.229107 1644261 addons.go:234] Setting addon storage-provisioner=true in "addons-747503"
	I0420 00:47:16.229144 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.229674 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.238837 1644261 addons.go:69] Setting gcp-auth=true in profile "addons-747503"
	I0420 00:47:16.238899 1644261 mustload.go:65] Loading cluster: addons-747503
	I0420 00:47:16.239098 1644261 config.go:182] Loaded profile config "addons-747503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 00:47:16.239349 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.247529 1644261 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-747503"
	I0420 00:47:16.247581 1644261 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-747503"
	I0420 00:47:16.247904 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.260923 1644261 addons.go:69] Setting ingress=true in profile "addons-747503"
	I0420 00:47:16.261022 1644261 addons.go:234] Setting addon ingress=true in "addons-747503"
	I0420 00:47:16.261118 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.261627 1644261 addons.go:69] Setting volumesnapshots=true in profile "addons-747503"
	I0420 00:47:16.261657 1644261 addons.go:234] Setting addon volumesnapshots=true in "addons-747503"
	I0420 00:47:16.261681 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.262076 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.269352 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.380646 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0420 00:47:16.389090 1644261 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0420 00:47:16.389162 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0420 00:47:16.389257 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
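The docker container inspect template above extracts the host port that Docker mapped to the guest's sshd (container port 22/tcp); that is the port the sshutil clients below dial on 127.0.0.1 (34675 in this run). docker port answers the same question without the Go template, assuming the container name from this profile:

	docker port addons-747503 22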
	I0420 00:47:16.396585 1644261 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0420 00:47:16.413756 1644261 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0420 00:47:16.415782 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0420 00:47:16.415807 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0420 00:47:16.415883 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.413893 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0420 00:47:16.401466 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.403036 1644261 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-747503"
	I0420 00:47:16.418273 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0420 00:47:16.418280 1644261 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0420 00:47:16.418293 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0420 00:47:16.420636 1644261 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0420 00:47:16.420642 1644261 out.go:177]   - Using image docker.io/registry:2.8.3
	I0420 00:47:16.420647 1644261 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0420 00:47:16.425004 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0420 00:47:16.425041 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.426776 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.426994 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.441701 1644261 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0420 00:47:16.441728 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0420 00:47:16.441815 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.442302 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0420 00:47:16.445183 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0420 00:47:16.442535 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0420 00:47:16.443701 1644261 addons.go:234] Setting addon default-storageclass=true in "addons-747503"
	I0420 00:47:16.448904 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0420 00:47:16.448990 1644261 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:47:16.449022 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:16.451018 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:16.451164 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0420 00:47:16.453010 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0420 00:47:16.451430 1644261 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0420 00:47:16.451490 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0420 00:47:16.451506 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0420 00:47:16.451526 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0420 00:47:16.455094 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0420 00:47:16.456879 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0420 00:47:16.456900 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0420 00:47:16.456978 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.458542 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0420 00:47:16.458614 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.474496 1644261 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0420 00:47:16.474512 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0420 00:47:16.474577 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.477483 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.461983 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0420 00:47:16.500835 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0420 00:47:16.502674 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0420 00:47:16.504324 1644261 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0420 00:47:16.506235 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0420 00:47:16.506258 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0420 00:47:16.506328 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.517701 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.462052 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.601782 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0420 00:47:16.609614 1644261 out.go:177]   - Using image docker.io/busybox:stable
	I0420 00:47:16.605998 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.601684 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.609993 1644261 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0420 00:47:16.611424 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0420 00:47:16.611499 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.620520 1644261 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0420 00:47:16.617584 1644261 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0420 00:47:16.624839 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0420 00:47:16.624942 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.638699 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.639612 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.641927 1644261 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0420 00:47:16.641980 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0420 00:47:16.642067 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:16.668331 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
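The sed pipeline above rewrites the CoreDNS Corefile in place: it injects a hosts block that resolves host.minikube.internal to the gateway IP 192.168.49.1 (falling through to normal resolution for everything else), enables the log plugin, and replaces the ConfigMap. To inspect the resulting Corefile:

	kubectl --context addons-747503 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'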
	I0420 00:47:16.668981 1644261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 00:47:16.669308 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.673621 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.676969 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.677828 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.699192 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.730795 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.732606 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.742311 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:16.780716 1644261 node_ready.go:35] waiting up to 6m0s for node "addons-747503" to be "Ready" ...
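node_ready.go now polls the node object until its Ready condition turns True, bounded by the 6m0s wait configured at start.go:234 above. The equivalent one-shot check:

	kubectl --context addons-747503 wait --for=condition=Ready node/addons-747503 --timeout=6m0s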
	I0420 00:47:16.921085 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0420 00:47:16.921116 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0420 00:47:16.991932 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0420 00:47:16.996333 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0420 00:47:17.100495 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0420 00:47:17.100519 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0420 00:47:17.112923 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0420 00:47:17.112950 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0420 00:47:17.120377 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0420 00:47:17.120403 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0420 00:47:17.189815 1644261 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0420 00:47:17.189844 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0420 00:47:17.207623 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0420 00:47:17.210877 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0420 00:47:17.219064 1644261 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0420 00:47:17.219098 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0420 00:47:17.227561 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0420 00:47:17.227588 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0420 00:47:17.268204 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0420 00:47:17.268232 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0420 00:47:17.272800 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0420 00:47:17.272833 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0420 00:47:17.275286 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0420 00:47:17.303754 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0420 00:47:17.334811 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0420 00:47:17.334879 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0420 00:47:17.341610 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0420 00:47:17.394607 1644261 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0420 00:47:17.394678 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0420 00:47:17.407225 1644261 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0420 00:47:17.407292 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0420 00:47:17.411478 1644261 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0420 00:47:17.411505 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0420 00:47:17.412384 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0420 00:47:17.412410 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0420 00:47:17.459411 1644261 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 00:47:17.459482 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0420 00:47:17.520995 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0420 00:47:17.521029 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0420 00:47:17.562920 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0420 00:47:17.570793 1644261 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0420 00:47:17.570820 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0420 00:47:17.628250 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0420 00:47:17.628287 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0420 00:47:17.635304 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0420 00:47:17.675653 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0420 00:47:17.675685 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0420 00:47:17.677655 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0420 00:47:17.692234 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0420 00:47:17.692263 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0420 00:47:17.785863 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0420 00:47:17.785891 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0420 00:47:17.790678 1644261 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0420 00:47:17.790712 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0420 00:47:17.836414 1644261 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0420 00:47:17.836445 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0420 00:47:17.897210 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0420 00:47:17.956384 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0420 00:47:17.956418 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0420 00:47:17.972928 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0420 00:47:17.972958 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0420 00:47:18.058087 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0420 00:47:18.058115 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0420 00:47:18.122175 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0420 00:47:18.122210 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0420 00:47:18.196857 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0420 00:47:18.196898 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0420 00:47:18.223649 1644261 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0420 00:47:18.223683 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0420 00:47:18.308893 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0420 00:47:18.308918 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0420 00:47:18.317127 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0420 00:47:18.431462 1644261 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0420 00:47:18.431507 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0420 00:47:18.590897 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0420 00:47:18.976068 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:20.096220 1644261 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.427813593s)
	I0420 00:47:20.096422 1644261 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
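
The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway IP 192.168.49.1, by inserting a hosts{} stanza ahead of the forward plugin in the Corefile. A hedged client-go equivalent of that edit (minikube itself shells out to kubectl and sed, exactly as logged above; this sketch assumes an already-configured kubernetes.Interface):

	package sketch

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// injectHostRecord inserts a hosts{} block before the forward plugin in the
	// coredns Corefile, matching what the logged sed expression does.
	func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
		cms := cs.CoreV1().ConfigMaps("kube-system")
		cm, err := cms.Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		stanza := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", stanza+"        forward .", 1)
		_, err = cms.Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}
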
	I0420 00:47:20.703392 1644261 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-747503" context rescaled to 1 replicas
	I0420 00:47:21.021961 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.02998807s)
	I0420 00:47:21.316695 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:22.188682 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.192309833s)
	I0420 00:47:22.188774 1644261 addons.go:470] Verifying addon ingress=true in "addons-747503"
	I0420 00:47:22.188943 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.981293644s)
	I0420 00:47:22.189126 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.978079286s)
	I0420 00:47:22.189179 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.91383171s)
	I0420 00:47:22.189212 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.88538872s)
	I0420 00:47:22.189275 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.847605776s)
	I0420 00:47:22.189361 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.554030452s)
	I0420 00:47:22.189389 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.51171099s)
	I0420 00:47:22.189421 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.626351674s)
	I0420 00:47:22.192225 1644261 out.go:177] * Verifying ingress addon...
	I0420 00:47:22.192770 1644261 addons.go:470] Verifying addon metrics-server=true in "addons-747503"
	I0420 00:47:22.192870 1644261 addons.go:470] Verifying addon registry=true in "addons-747503"
	I0420 00:47:22.196209 1644261 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0420 00:47:22.197884 1644261 out.go:177] * Verifying registry addon...
	I0420 00:47:22.197983 1644261 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-747503 service yakd-dashboard -n yakd-dashboard
	
	I0420 00:47:22.200704 1644261 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0420 00:47:22.211783 1644261 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0420 00:47:22.211886 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:22.214456 1644261 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0420 00:47:22.214527 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
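
The repeated kapi.go:96 lines that dominate the rest of this log are a poll loop: list the pods matching a label selector, report the current phase, and keep waiting while any pod is still Pending. A minimal sketch of that pattern (phase check only; it assumes a configured kubernetes.Interface, and the real helper also inspects per-container status, as the "ContainersNotReady" output earlier in this report shows):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls until every pod matching selector is Running.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists are retried
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // still Pending, as logged above
					}
				}
				return true, nil
			})
	}
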
	W0420 00:47:22.238344 1644261 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0420 00:47:22.380821 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.063648234s)
	I0420 00:47:22.381083 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.483841163s)
	W0420 00:47:22.381139 1644261 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0420 00:47:22.381175 1644261 retry.go:31] will retry after 166.820915ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
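
The failure above is an ordering problem: the VolumeSnapshotClass object cannot be applied until the VolumeSnapshot CRDs created in the same batch have been registered by the API server, so retry.go backs off briefly and applies again (and the next attempt, below, adds --force). A sketch of that retry-with-backoff pattern; the file arguments and the doubling backoff are illustrative rather than minikube's exact code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` until it succeeds or attempts run
	// out, doubling the delay between tries (the log shows a ~167ms first retry).
	func applyWithRetry(files []string, attempts int) error {
		backoff := 150 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			args := []string{"apply"}
			for _, f := range files {
				args = append(args, "-f", f)
			}
			out, runErr := exec.Command("kubectl", args...).CombinedOutput()
			if runErr == nil {
				return nil
			}
			err = fmt.Errorf("apply failed: %w: %s", runErr, out)
			time.Sleep(backoff)
			backoff *= 2
		}
		return err
	}

	func main() {
		// Hypothetical invocation; real manifests live under /etc/kubernetes/addons/.
		if err := applyWithRetry([]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}, 3); err != nil {
			fmt.Println(err)
		}
	}
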
	I0420 00:47:22.548877 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0420 00:47:22.600574 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.009625669s)
	I0420 00:47:22.600671 1644261 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-747503"
	I0420 00:47:22.603229 1644261 out.go:177] * Verifying csi-hostpath-driver addon...
	I0420 00:47:22.606034 1644261 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0420 00:47:22.670601 1644261 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0420 00:47:22.670671 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:22.720534 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:22.725459 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:23.175823 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:23.228069 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:23.229636 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:23.610334 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:23.701866 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:23.705088 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:23.785324 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:24.112035 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:24.205349 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:24.210857 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:24.611934 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:24.703020 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:24.705704 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:25.111349 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:25.203668 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:25.206311 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:25.544549 1644261 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0420 00:47:25.544657 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:25.569718 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:25.611543 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:25.707373 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:25.711627 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:25.751589 1644261 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0420 00:47:25.791122 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:25.816447 1644261 addons.go:234] Setting addon gcp-auth=true in "addons-747503"
	I0420 00:47:25.816499 1644261 host.go:66] Checking if "addons-747503" exists ...
	I0420 00:47:25.816957 1644261 cli_runner.go:164] Run: docker container inspect addons-747503 --format={{.State.Status}}
	I0420 00:47:25.845517 1644261 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0420 00:47:25.845586 1644261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-747503
	I0420 00:47:25.877309 1644261 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.328324528s)
	I0420 00:47:25.877697 1644261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34675 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/addons-747503/id_rsa Username:docker}
	I0420 00:47:25.975676 1644261 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0420 00:47:25.978070 1644261 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0420 00:47:25.980578 1644261 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0420 00:47:25.980605 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0420 00:47:25.999059 1644261 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0420 00:47:25.999087 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0420 00:47:26.023351 1644261 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0420 00:47:26.023374 1644261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0420 00:47:26.045389 1644261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0420 00:47:26.110669 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:26.202148 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:26.205465 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:26.611994 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:26.731236 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:26.732300 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:26.789639 1644261 addons.go:470] Verifying addon gcp-auth=true in "addons-747503"
	I0420 00:47:26.792415 1644261 out.go:177] * Verifying gcp-auth addon...
	I0420 00:47:26.795034 1644261 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0420 00:47:26.801627 1644261 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0420 00:47:26.801647 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:27.111220 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:27.202244 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:27.206342 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:27.299355 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:27.611552 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:27.704193 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:27.706819 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:27.800094 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:28.112242 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:28.203543 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:28.205771 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:28.285015 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:28.299535 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:28.610656 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:28.701793 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:28.705035 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:28.799716 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:29.110925 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:29.202332 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:29.205716 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:29.298341 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:29.610673 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:29.701621 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:29.706306 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:29.798919 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:30.112640 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:30.204537 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:30.206706 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:30.299144 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:30.610522 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:30.702150 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:30.704677 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:30.784363 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:30.799016 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:31.110019 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:31.202111 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:31.205063 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:31.299204 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:31.611514 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:31.701803 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:31.704838 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:31.798703 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:32.111116 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:32.202000 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:32.204400 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:32.298928 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:32.611045 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:32.702323 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:32.705822 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:32.789058 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:32.798725 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:33.110424 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:33.204110 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:33.205044 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:33.298391 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:33.610022 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:33.701936 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:33.705447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:33.798469 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:34.111023 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:34.201972 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:34.204989 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:34.298734 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:34.611132 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:34.702210 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:34.704568 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:34.798686 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:35.114336 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:35.201894 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:35.204405 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:35.284372 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:35.299112 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:35.610506 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:35.703095 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:35.704656 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:35.798675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:36.111001 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:36.202000 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:36.205085 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:36.298568 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:36.610614 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:36.701867 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:36.705335 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:36.798308 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:37.110714 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:37.201367 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:37.205486 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:37.298446 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:37.610698 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:37.701750 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:37.703884 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:37.784367 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:37.798325 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:38.110748 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:38.201466 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:38.204655 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:38.298691 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:38.610939 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:38.702667 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:38.706391 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:38.798284 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:39.110263 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:39.202152 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:39.205507 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:39.298283 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:39.610642 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:39.701439 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:39.704765 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:39.784473 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:39.798580 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:40.111665 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:40.201902 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:40.204092 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:40.298312 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:40.611241 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:40.701932 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:40.704592 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:40.798190 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:41.110629 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:41.202188 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:41.204072 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:41.298945 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:41.611166 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:41.702282 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:41.704810 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:41.784563 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:41.798789 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:42.110754 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:42.202995 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:42.205235 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:42.299222 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:42.611146 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:42.702723 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:42.705024 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:42.798378 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:43.110641 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:43.202122 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:43.204551 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:43.299823 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:43.610371 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:43.702371 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:43.705119 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:43.798537 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:44.110802 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:44.201908 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:44.204050 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:44.283628 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:44.299022 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:44.611069 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:44.702139 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:44.704447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:44.798400 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:45.110913 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:45.203669 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:45.207125 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:45.299652 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:45.611596 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:45.702314 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:45.704145 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:45.798613 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:46.111305 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:46.202398 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:46.207349 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:46.284676 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:46.299304 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:46.610152 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:46.701326 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:46.704270 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:46.798790 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:47.110635 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:47.201792 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:47.203647 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:47.298677 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:47.610877 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:47.701753 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:47.704933 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:47.798847 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:48.111425 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:48.201251 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:48.204521 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:48.298577 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:48.615418 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:48.703331 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:48.707352 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:48.784673 1644261 node_ready.go:53] node "addons-747503" has status "Ready":"False"
	I0420 00:47:48.800104 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:49.112682 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:49.201597 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:49.204732 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:49.298628 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:49.610976 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:49.701739 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:49.705321 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:49.798577 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:50.111739 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:50.201941 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:50.203737 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:50.299042 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:50.610954 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:50.709155 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:50.709916 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:50.799587 1644261 node_ready.go:49] node "addons-747503" has status "Ready":"True"
	I0420 00:47:50.799614 1644261 node_ready.go:38] duration metric: took 34.018855397s for node "addons-747503" to be "Ready" ...
	I0420 00:47:50.799624 1644261 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
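	The node_ready/pod_ready lines above poll the API server on a roughly 500ms tick until the Ready condition on each object reports True, with a 6m0s budget for the system-critical selectors listed. A minimal client-go sketch of that pattern (illustrative only, not minikube's actual pod_ready.go; assumes a reachable cluster via $HOME/.kube/config and the client-go modules on the module path):

	// podsready.go: a sketch of the pod_ready-style check in the log above.
	// Illustrative only; helper names are not minikube's.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	// podReady reports whether a pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The same system-critical selectors the log line above lists.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget
		for _, sel := range selectors {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
				allReady := err == nil && len(pods.Items) > 0
				if allReady {
					for i := range pods.Items {
						if !podReady(&pods.Items[i]) {
							allReady = false
							break
						}
					}
				}
				if allReady {
					fmt.Printf("%s: Ready\n", sel)
					break
				}
				if time.Now().After(deadline) {
					fmt.Printf("%s: timed out waiting\n", sel)
					break
				}
				time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
			}
		}
	}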
	I0420 00:47:50.839199 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:50.842280 1644261 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pj8wd" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.121354 1644261 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0420 00:47:51.121385 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:51.265128 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:51.306316 1644261 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0420 00:47:51.306343 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:51.316679 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:51.646936 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:51.738236 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:51.738864 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:51.825253 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:51.869885 1644261 pod_ready.go:92] pod "coredns-7db6d8ff4d-pj8wd" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.869905 1644261 pod_ready.go:81] duration metric: took 1.02759912s for pod "coredns-7db6d8ff4d-pj8wd" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.869936 1644261 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.880443 1644261 pod_ready.go:92] pod "etcd-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.880468 1644261 pod_ready.go:81] duration metric: took 10.523706ms for pod "etcd-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.880483 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.893210 1644261 pod_ready.go:92] pod "kube-apiserver-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.893237 1644261 pod_ready.go:81] duration metric: took 12.745711ms for pod "kube-apiserver-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.893253 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.902837 1644261 pod_ready.go:92] pod "kube-controller-manager-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.902861 1644261 pod_ready.go:81] duration metric: took 9.600699ms for pod "kube-controller-manager-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.902876 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cmk9r" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.984300 1644261 pod_ready.go:92] pod "kube-proxy-cmk9r" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:51.984328 1644261 pod_ready.go:81] duration metric: took 81.441699ms for pod "kube-proxy-cmk9r" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:51.984340 1644261 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:52.112853 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:52.203480 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:52.206627 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:52.298995 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:52.385428 1644261 pod_ready.go:92] pod "kube-scheduler-addons-747503" in "kube-system" namespace has status "Ready":"True"
	I0420 00:47:52.385502 1644261 pod_ready.go:81] duration metric: took 401.135821ms for pod "kube-scheduler-addons-747503" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:52.385569 1644261 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace to be "Ready" ...
	I0420 00:47:52.612694 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:52.702190 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:52.747764 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:52.816322 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:53.112453 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:53.204654 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:53.207621 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:53.300011 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:53.611628 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:53.705044 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:53.707972 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:53.798494 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:54.114108 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:54.205753 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:54.208644 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:54.299729 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:54.393624 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:47:54.614619 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:54.705416 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:54.734978 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:54.802347 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:55.114619 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:55.207471 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:55.207745 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:55.298943 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:55.613443 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:55.703721 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:55.711000 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:55.800030 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:56.112264 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:56.202387 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:56.205588 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:56.298472 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:56.611287 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:56.702954 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:56.706206 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:56.806862 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:56.892905 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:47:57.111783 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:57.202483 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:57.206930 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:57.298660 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:57.614099 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:57.715434 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:57.716689 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:57.799161 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:58.111940 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:58.202592 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:58.205903 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:58.299398 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:58.614025 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:58.703746 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:58.710610 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:58.800586 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:59.113393 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:59.210841 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:59.213028 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:59.300372 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:47:59.395893 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:47:59.624560 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:47:59.702174 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:47:59.706175 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:47:59.798856 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:00.144163 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:00.225477 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:00.229320 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:00.331039 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:00.612190 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:00.703620 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:00.707231 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:00.800642 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:01.114451 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:01.205638 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:01.213681 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:01.299675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:01.612691 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:01.703427 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:01.705034 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0420 00:48:01.798680 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:01.892315 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:02.114119 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:02.204802 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:02.225935 1644261 kapi.go:107] duration metric: took 40.025228032s to wait for kubernetes.io/minikube-addons=registry ...
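	kapi.go:96 prints once per poll iteration while the labeled pods are still Pending, and kapi.go:107 records the total wall-clock wait once they no longer are, as with the 40.025228032s registry figure here. An equivalent wrapper built on apimachinery's wait helpers might look like this (a sketch, not minikube's code; waitForLabel is an illustrative name, and wait.PollUntilContextTimeout assumes k8s.io/apimachinery v0.27 or newer):

	// addonwait.go: sketch of a kapi.go-style wait that polls one label
	// selector and reports a duration metric when done. Illustrative only.
	package addonwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel blocks until every pod matching sel in ns is Running,
	// logging each pending attempt, and returns the elapsed wall-clock time.
	func waitForLabel(cs kubernetes.Interface, ns, sel string, timeout time.Duration) (time.Duration, error) {
		start := time.Now()
		err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					return false, nil // treat transient API errors as "not yet"
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", sel, p.Status.Phase)
						return false, nil
					}
				}
				return len(pods.Items) > 0, nil
			})
		return time.Since(start), err
	}

	Each addon label would get its own call of this kind, which is how the registry watcher can report its 40s total here while the csi-hostpath-driver, ingress-nginx, and gcp-auth watchers keep polling on their own timers.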
	I0420 00:48:02.312863 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:02.627694 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:02.703161 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:02.798728 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:03.113524 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:03.202842 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:03.299523 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:03.613428 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:03.703775 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:03.800788 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:03.895136 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:04.113370 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:04.202943 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:04.299410 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:04.613215 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:04.702731 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:04.799550 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:05.113042 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:05.202585 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:05.299047 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:05.614002 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:05.702680 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:05.802558 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:05.895296 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:06.114675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:06.204549 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:06.302473 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:06.614112 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:06.703570 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:06.799260 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:07.113316 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:07.202565 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:07.298863 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:07.612018 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:07.703264 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:07.798648 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:08.112270 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:08.203042 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:08.299153 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:08.393303 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:08.613346 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:08.702765 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:08.799658 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:09.112127 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:09.203847 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:09.299447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:09.614216 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:09.703657 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:09.800601 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:10.118707 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:10.202184 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:10.298516 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:10.393387 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:10.613200 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:10.703138 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:10.800550 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:11.137314 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:11.202576 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:11.299140 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:11.613141 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:11.703548 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:11.805116 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:12.111561 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:12.202457 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:12.299341 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:12.612160 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:12.702500 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:12.799184 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:12.892518 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:13.111952 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:13.202640 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:13.299417 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:13.612612 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:13.703214 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:13.799562 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:14.112092 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:14.202789 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:14.298940 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:14.612071 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:14.701930 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:14.798636 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:15.112017 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:15.202850 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:15.300071 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:15.396380 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:15.642371 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:15.703162 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:15.799216 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:16.113062 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:16.202577 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:16.298955 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:16.612867 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:16.702379 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:16.798754 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:17.111096 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:17.202268 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:17.298651 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:17.611502 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:17.702326 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:17.799117 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:17.891723 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:18.112154 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:18.203753 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:18.298844 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:18.611817 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:18.701804 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:18.799969 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:19.112834 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:19.202549 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:19.299356 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:19.612472 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:19.702362 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:19.800164 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:19.894073 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:20.112337 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:20.205154 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:20.299624 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:20.612270 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:20.702951 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:20.798601 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:21.111917 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:21.202373 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:21.299686 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:21.612723 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:21.702148 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:21.798629 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:22.112515 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:22.203367 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:22.302148 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:22.392640 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:22.612660 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:22.702810 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:22.798786 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:23.111919 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:23.201812 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:23.299108 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:23.611637 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:23.701775 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:23.799304 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:24.131787 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:24.255684 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:24.316635 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:24.418028 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:24.617598 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:24.702006 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:24.799016 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:25.112839 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:25.201904 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:25.298574 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:25.611863 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:25.702151 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:25.798626 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:26.112621 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:26.202033 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:26.298771 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:26.621173 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:26.704880 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:26.799455 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:26.892011 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:27.111595 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:27.202612 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:27.298946 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:27.612741 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:27.703062 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:27.799262 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:28.111989 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:28.205133 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:28.300116 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:28.617767 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:28.702926 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:28.798906 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:28.895433 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:29.115635 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:29.202330 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:29.305517 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:29.612221 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:29.715115 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:29.800816 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:30.113986 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:30.204176 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:30.300436 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:30.611466 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:30.702492 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:30.799210 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:31.132012 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:31.204558 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:31.299795 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:31.394405 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:31.611911 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:31.702249 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:31.798674 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:32.115252 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:32.204156 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:32.298833 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:32.612034 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:32.702349 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:32.798789 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:33.112130 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:33.202582 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:33.299170 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:33.612774 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:33.702834 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:33.800234 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:33.892097 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:34.112008 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:34.202297 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:34.298717 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:34.619328 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:34.703435 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:34.806019 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:35.120414 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:35.205464 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:35.300355 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:35.613650 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:35.703059 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:35.799691 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:35.894200 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:36.113606 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:36.202374 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:36.299444 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:36.612786 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:36.702747 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:36.799741 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:37.112124 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:37.202762 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:37.301973 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:37.612106 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:37.702390 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:37.820773 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:37.896709 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:38.112525 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:38.203055 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:38.298160 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:38.614559 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:38.702186 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:38.798806 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:39.113206 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:39.203142 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:39.302909 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:39.621741 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:39.702067 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:39.799389 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:40.113336 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:40.203042 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:40.298723 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:40.395135 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:40.612345 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:40.702488 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:40.799104 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:41.122104 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:41.202448 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:41.300486 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:41.612243 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:41.703549 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:41.799237 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:42.111985 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:42.203111 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:42.302465 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:42.612639 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:42.703714 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:42.799406 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:42.892695 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:43.112179 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:43.203272 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:43.298925 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:43.612258 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:43.702705 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:43.799390 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:44.115774 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:44.202051 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:44.298314 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:44.611987 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:44.702124 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:44.798541 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:45.112791 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:45.204493 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:45.299729 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:45.393729 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:45.612789 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:45.702486 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:45.799560 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:46.119540 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0420 00:48:46.202732 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:46.299293 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:46.611893 1644261 kapi.go:107] duration metric: took 1m24.005858121s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
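	With the csi-hostpath-driver watcher finished at 1m24s, only the ingress-nginx and gcp-auth selectors remain on the poll loop, which is why the log drops from three wait lines per ~500ms tick to two below.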
	I0420 00:48:46.702042 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:46.798447 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:47.202393 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:47.298773 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:47.701765 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:47.799351 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:47.892278 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:48.202626 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:48.299390 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:48.702100 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:48.799332 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:49.201889 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:49.299047 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:49.702415 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:49.799051 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:49.892790 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:50.202697 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:50.299292 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:50.702478 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:50.798784 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:51.202229 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:51.298707 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:51.703133 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:51.798709 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:52.202174 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:52.298480 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:52.391893 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:52.702258 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:52.798434 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:53.202557 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:53.298973 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:53.702469 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:53.798795 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:54.201914 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:54.299019 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:54.392210 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:54.702208 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:54.798675 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:55.201889 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:55.299247 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:55.703349 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:55.798800 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:56.201804 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:56.299211 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:56.398126 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:56.703544 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:56.801082 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:57.203554 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:57.300723 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:57.701744 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:57.799038 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:58.202294 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:58.298796 1644261 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0420 00:48:58.408975 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:48:58.703585 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:58.800895 1644261 kapi.go:107] duration metric: took 1m32.005859357s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0420 00:48:58.803430 1644261 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-747503 cluster.
	I0420 00:48:58.805977 1644261 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0420 00:48:58.808774 1644261 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
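The three gcp-auth hints above describe opt-out behavior at pod-admission time. A minimal sketch of a pod that opts out of credential mounting, assuming the conventional label value "true" and a hypothetical pod name (verify the exact label semantics against your minikube release):

	kubectl --context addons-747503 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-demo              # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true" # assumption: "true" is the expected value
	spec:
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sleep", "3600"]
	EOF

Because the webhook acts at admission, the label must be present when the pod is created; labeling an already-running pod does not unmount credentials that were injected earlier.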
	I0420 00:48:59.214348 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:48:59.702282 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:00.204506 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:00.703046 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:00.895496 1644261 pod_ready.go:102] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"False"
	I0420 00:49:01.210517 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:01.401710 1644261 pod_ready.go:92] pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace has status "Ready":"True"
	I0420 00:49:01.401740 1644261 pod_ready.go:81] duration metric: took 1m9.016144355s for pod "metrics-server-c59844bb4-jmtz4" in "kube-system" namespace to be "Ready" ...
	I0420 00:49:01.401759 1644261 pod_ready.go:38] duration metric: took 1m10.602112322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 00:49:01.401774 1644261 api_server.go:52] waiting for apiserver process to appear ...
	I0420 00:49:01.401809 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:49:01.401878 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:49:01.476056 1644261 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:01.476082 1644261 cri.go:89] found id: ""
	I0420 00:49:01.476091 1644261 logs.go:276] 1 containers: [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053]
	I0420 00:49:01.476157 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.482452 1644261 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:49:01.482549 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:49:01.545143 1644261 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:01.545169 1644261 cri.go:89] found id: ""
	I0420 00:49:01.545179 1644261 logs.go:276] 1 containers: [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330]
	I0420 00:49:01.545245 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.550669 1644261 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:49:01.550748 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:49:01.613640 1644261 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:01.613666 1644261 cri.go:89] found id: ""
	I0420 00:49:01.613678 1644261 logs.go:276] 1 containers: [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df]
	I0420 00:49:01.613749 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.619858 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:49:01.619944 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:49:01.677562 1644261 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:01.677589 1644261 cri.go:89] found id: ""
	I0420 00:49:01.677600 1644261 logs.go:276] 1 containers: [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929]
	I0420 00:49:01.677672 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.682732 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:49:01.682885 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:49:01.704238 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:01.772321 1644261 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:01.772392 1644261 cri.go:89] found id: ""
	I0420 00:49:01.772427 1644261 logs.go:276] 1 containers: [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20]
	I0420 00:49:01.772523 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.776830 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:49:01.776962 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:49:01.856325 1644261 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:01.856401 1644261 cri.go:89] found id: ""
	I0420 00:49:01.856433 1644261 logs.go:276] 1 containers: [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e]
	I0420 00:49:01.856549 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.861620 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:49:01.861776 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:49:01.928733 1644261 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:01.928808 1644261 cri.go:89] found id: ""
	I0420 00:49:01.928845 1644261 logs.go:276] 1 containers: [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52]
	I0420 00:49:01.928943 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:01.933261 1644261 logs.go:123] Gathering logs for dmesg ...
	I0420 00:49:01.933340 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:49:01.955010 1644261 logs.go:123] Gathering logs for kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] ...
	I0420 00:49:01.955091 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:02.037301 1644261 logs.go:123] Gathering logs for etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] ...
	I0420 00:49:02.037382 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:02.098944 1644261 logs.go:123] Gathering logs for kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] ...
	I0420 00:49:02.098977 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:02.203698 1644261 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:49:02.203731 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:49:02.208871 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:02.335611 1644261 logs.go:123] Gathering logs for kubelet ...
	I0420 00:49:02.335713 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0420 00:49:02.411512 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.801309    1518 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.411792 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.801356    1518 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.412589 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815347    1518 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.412756 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815367    1518 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.413022 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.413229 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.413874 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820274    1518 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414080 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820315    1518 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414271 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820622    1518 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414479 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820646    1518 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414667 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.821047    1518 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.414879 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.415678 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.416354 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.416560 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:02.416767 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:02.481101 1644261 logs.go:123] Gathering logs for coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] ...
	I0420 00:49:02.481147 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:02.545331 1644261 logs.go:123] Gathering logs for kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] ...
	I0420 00:49:02.545360 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:02.653396 1644261 logs.go:123] Gathering logs for kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] ...
	I0420 00:49:02.653434 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:02.703574 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:02.718396 1644261 logs.go:123] Gathering logs for kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] ...
	I0420 00:49:02.718473 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:02.784614 1644261 logs.go:123] Gathering logs for container status ...
	I0420 00:49:02.784642 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:49:02.862815 1644261 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:49:02.862918 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:49:03.154905 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:03.154978 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0420 00:49:03.155071 1644261 out.go:239] X Problems detected in kubelet:
	W0420 00:49:03.155116 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155296 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155332 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155379 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:03.155413 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:03.155457 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:03.155482 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:49:03.203035 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:03.702362 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:04.203239 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:04.711009 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:05.203956 1644261 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0420 00:49:05.702494 1644261 kapi.go:107] duration metric: took 1m43.506280483s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0420 00:49:05.705025 1644261 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0420 00:49:05.707241 1644261 addons.go:505] duration metric: took 1m49.525505308s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
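With all addons reported enabled, the earlier gcp-auth hint ("rerun addons enable with --refresh") translates to a one-liner against this profile; a sketch, assuming the profile name from this run:

	minikube -p addons-747503 addons enable gcp-auth --refresh

Per the log's own hint, this refreshes credential mounts for pods that existed before the addon finished, without recreating them by hand.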
	I0420 00:49:13.156219 1644261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 00:49:13.170608 1644261 api_server.go:72] duration metric: took 1m56.989122484s to wait for apiserver process to appear ...
	I0420 00:49:13.170636 1644261 api_server.go:88] waiting for apiserver healthz status ...
	I0420 00:49:13.170677 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:49:13.170743 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:49:13.215140 1644261 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:13.215162 1644261 cri.go:89] found id: ""
	I0420 00:49:13.215171 1644261 logs.go:276] 1 containers: [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053]
	I0420 00:49:13.215236 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.218892 1644261 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:49:13.218971 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:49:13.263654 1644261 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:13.263682 1644261 cri.go:89] found id: ""
	I0420 00:49:13.263691 1644261 logs.go:276] 1 containers: [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330]
	I0420 00:49:13.263764 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.267679 1644261 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:49:13.267768 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:49:13.309684 1644261 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:13.309708 1644261 cri.go:89] found id: ""
	I0420 00:49:13.309720 1644261 logs.go:276] 1 containers: [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df]
	I0420 00:49:13.309776 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.313423 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:49:13.313507 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:49:13.351369 1644261 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:13.351394 1644261 cri.go:89] found id: ""
	I0420 00:49:13.351403 1644261 logs.go:276] 1 containers: [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929]
	I0420 00:49:13.351459 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.358220 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:49:13.358301 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:49:13.402876 1644261 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:13.402901 1644261 cri.go:89] found id: ""
	I0420 00:49:13.402909 1644261 logs.go:276] 1 containers: [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20]
	I0420 00:49:13.402967 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.406557 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:49:13.406631 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:49:13.446459 1644261 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:13.446528 1644261 cri.go:89] found id: ""
	I0420 00:49:13.446542 1644261 logs.go:276] 1 containers: [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e]
	I0420 00:49:13.446602 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.450261 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:49:13.450351 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:49:13.490186 1644261 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:13.490224 1644261 cri.go:89] found id: ""
	I0420 00:49:13.490234 1644261 logs.go:276] 1 containers: [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52]
	I0420 00:49:13.490331 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:13.493880 1644261 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:49:13.493909 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:49:13.625695 1644261 logs.go:123] Gathering logs for kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] ...
	I0420 00:49:13.625770 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:13.692424 1644261 logs.go:123] Gathering logs for coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] ...
	I0420 00:49:13.692460 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:13.739447 1644261 logs.go:123] Gathering logs for kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] ...
	I0420 00:49:13.739479 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:13.783910 1644261 logs.go:123] Gathering logs for container status ...
	I0420 00:49:13.783946 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:49:13.846042 1644261 logs.go:123] Gathering logs for kubelet ...
	I0420 00:49:13.846079 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0420 00:49:13.886398 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.801309    1518 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.886620 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.801356    1518 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887405 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815347    1518 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887575 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815367    1518 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887758 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.887960 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.888582 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820274    1518 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.888784 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820315    1518 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.888970 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820622    1518 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.889177 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820646    1518 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.889365 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.821047    1518 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.889580 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890396 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890599 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890791 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:13.890996 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:13.938460 1644261 logs.go:123] Gathering logs for dmesg ...
	I0420 00:49:13.938494 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:49:13.965511 1644261 logs.go:123] Gathering logs for kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] ...
	I0420 00:49:13.965647 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:14.058549 1644261 logs.go:123] Gathering logs for kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] ...
	I0420 00:49:14.058589 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:14.107649 1644261 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:49:14.107678 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:49:14.200718 1644261 logs.go:123] Gathering logs for etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] ...
	I0420 00:49:14.200757 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:14.253776 1644261 logs.go:123] Gathering logs for kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] ...
	I0420 00:49:14.253816 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:14.295736 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:14.295762 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0420 00:49:14.295814 1644261 out.go:239] X Problems detected in kubelet:
	W0420 00:49:14.295828 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295836 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295844 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295852 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:14.295857 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:14.295871 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:14.295877 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:49:24.297174 1644261 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 00:49:24.304890 1644261 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0420 00:49:24.305901 1644261 api_server.go:141] control plane version: v1.30.0
	I0420 00:49:24.305926 1644261 api_server.go:131] duration metric: took 11.135283023s to wait for apiserver health ...
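The healthz round trip above can be reproduced by hand against the same endpoint; a sketch, assuming the apiserver still allows unauthenticated access to /healthz (the Kubernetes default via the system:public-info-viewer role) and skipping TLS verification because the cluster CA is not in the host trust store:

	curl -k https://192.168.49.2:8443/healthz
	# expected body: ok

Passing --cacert with the cluster CA file instead of -k performs the same check without disabling verification.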
	I0420 00:49:24.305935 1644261 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 00:49:24.305957 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 00:49:24.306023 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 00:49:24.342719 1644261 cri.go:89] found id: "d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:24.342741 1644261 cri.go:89] found id: ""
	I0420 00:49:24.342749 1644261 logs.go:276] 1 containers: [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053]
	I0420 00:49:24.342812 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.346322 1644261 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 00:49:24.346394 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 00:49:24.390679 1644261 cri.go:89] found id: "dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:24.390702 1644261 cri.go:89] found id: ""
	I0420 00:49:24.390710 1644261 logs.go:276] 1 containers: [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330]
	I0420 00:49:24.390791 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.394567 1644261 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 00:49:24.394662 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 00:49:24.442284 1644261 cri.go:89] found id: "dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:24.442307 1644261 cri.go:89] found id: ""
	I0420 00:49:24.442315 1644261 logs.go:276] 1 containers: [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df]
	I0420 00:49:24.442382 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.446024 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 00:49:24.446108 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 00:49:24.484224 1644261 cri.go:89] found id: "efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:24.484248 1644261 cri.go:89] found id: ""
	I0420 00:49:24.484260 1644261 logs.go:276] 1 containers: [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929]
	I0420 00:49:24.484317 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.488065 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 00:49:24.488140 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 00:49:24.561054 1644261 cri.go:89] found id: "8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:24.561075 1644261 cri.go:89] found id: ""
	I0420 00:49:24.561085 1644261 logs.go:276] 1 containers: [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20]
	I0420 00:49:24.561141 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.564741 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 00:49:24.564860 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 00:49:24.605384 1644261 cri.go:89] found id: "120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:24.605444 1644261 cri.go:89] found id: ""
	I0420 00:49:24.605466 1644261 logs.go:276] 1 containers: [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e]
	I0420 00:49:24.605568 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.609475 1644261 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 00:49:24.610101 1644261 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 00:49:24.647409 1644261 cri.go:89] found id: "b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:24.647432 1644261 cri.go:89] found id: ""
	I0420 00:49:24.647441 1644261 logs.go:276] 1 containers: [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52]
	I0420 00:49:24.647516 1644261 ssh_runner.go:195] Run: which crictl
	I0420 00:49:24.650908 1644261 logs.go:123] Gathering logs for kubelet ...
	I0420 00:49:24.650933 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0420 00:49:24.687053 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.801309    1518 reflector.go:547] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.687296 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.801356    1518 reflector.go:150] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688077 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815347    1518 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688245 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.815367    1518 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688430 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.688630 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.815395    1518 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689257 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820274    1518 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689459 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820315    1518 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689656 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.820622    1518 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.689866 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.820646    1518 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.690051 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.821047    1518 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.690258 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691080 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691285 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691472 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:24.691679 1644261 logs.go:138] Found kubelet problem: Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:24.740157 1644261 logs.go:123] Gathering logs for dmesg ...
	I0420 00:49:24.740187 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 00:49:24.760602 1644261 logs.go:123] Gathering logs for kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] ...
	I0420 00:49:24.760632 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053"
	I0420 00:49:24.828968 1644261 logs.go:123] Gathering logs for etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] ...
	I0420 00:49:24.829007 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330"
	I0420 00:49:24.876633 1644261 logs.go:123] Gathering logs for kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] ...
	I0420 00:49:24.876671 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52"
	I0420 00:49:24.922399 1644261 logs.go:123] Gathering logs for container status ...
	I0420 00:49:24.922431 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 00:49:24.969473 1644261 logs.go:123] Gathering logs for describe nodes ...
	I0420 00:49:24.969505 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 00:49:25.149062 1644261 logs.go:123] Gathering logs for coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] ...
	I0420 00:49:25.149098 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df"
	I0420 00:49:25.194458 1644261 logs.go:123] Gathering logs for kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] ...
	I0420 00:49:25.194489 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929"
	I0420 00:49:25.247513 1644261 logs.go:123] Gathering logs for kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] ...
	I0420 00:49:25.247547 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20"
	I0420 00:49:25.283929 1644261 logs.go:123] Gathering logs for kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] ...
	I0420 00:49:25.283956 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e"
	I0420 00:49:25.350599 1644261 logs.go:123] Gathering logs for CRI-O ...
	I0420 00:49:25.350633 1644261 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 00:49:25.466073 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:25.466105 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0420 00:49:25.466176 1644261 out.go:239] X Problems detected in kubelet:
	W0420 00:49:25.466192 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.821073    1518 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466205 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827880    1518 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466234 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.827916    1518 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-747503" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466254 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: W0420 00:47:50.827995    1518 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	W0420 00:49:25.466269 1644261 out.go:239]   Apr 20 00:47:50 addons-747503 kubelet[1518]: E0420 00:47:50.828009    1518 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-747503" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-747503' and this object
	I0420 00:49:25.466276 1644261 out.go:304] Setting ErrFile to fd 2...
	I0420 00:49:25.466287 1644261 out.go:338] TERM=,COLORTERM=, which probably does not support color
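	
	The "no relationship found between node 'addons-747503' and this object" denials listed above come from the Node authorizer: a kubelet may only read ConfigMaps and Secrets referenced by pods already bound to its node, so reflectors started before the scheduler binds those pods fail transiently and clear on retry. A minimal re-check after scheduling settles (a sketch; the context name is taken from this run, and "get" is used because the Node authorizer grants per-object access rather than broad lists):
	
	  kubectl --context addons-747503 auth can-i get configmaps/kube-root-ca.crt \
	    --as=system:node:addons-747503 --as-group=system:nodes -n local-path-storage
	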
	I0420 00:49:35.482334 1644261 system_pods.go:59] 18 kube-system pods found
	I0420 00:49:35.482377 1644261 system_pods.go:61] "coredns-7db6d8ff4d-pj8wd" [ce9c9144-65d1-45f2-a6e0-65ac4c220237] Running
	I0420 00:49:35.482384 1644261 system_pods.go:61] "csi-hostpath-attacher-0" [1407d955-83ec-4b1d-ac07-d55e593f975f] Running
	I0420 00:49:35.482389 1644261 system_pods.go:61] "csi-hostpath-resizer-0" [023884e7-abc6-4359-95ba-ee8031b2db76] Running
	I0420 00:49:35.482394 1644261 system_pods.go:61] "csi-hostpathplugin-z7j5n" [b938be04-8aac-427e-a62d-e0d6ecea4fe9] Running
	I0420 00:49:35.482399 1644261 system_pods.go:61] "etcd-addons-747503" [707cce58-27c7-483a-9f12-80d354c6e443] Running
	I0420 00:49:35.482402 1644261 system_pods.go:61] "kindnet-x7szp" [910dbd2a-9863-4585-8a5d-98c1bb4817e2] Running
	I0420 00:49:35.482407 1644261 system_pods.go:61] "kube-apiserver-addons-747503" [81db4265-6e75-41b4-85b6-c7e09e1979a7] Running
	I0420 00:49:35.482411 1644261 system_pods.go:61] "kube-controller-manager-addons-747503" [f4cfdf92-3a76-49c4-b1f6-3bc7cf34cd49] Running
	I0420 00:49:35.482420 1644261 system_pods.go:61] "kube-ingress-dns-minikube" [ec712066-7b44-45dc-a961-0f7688a75714] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0420 00:49:35.482431 1644261 system_pods.go:61] "kube-proxy-cmk9r" [13976009-573c-4b43-8062-07d9a92cb809] Running
	I0420 00:49:35.482441 1644261 system_pods.go:61] "kube-scheduler-addons-747503" [4c4ccef8-4e11-425f-9dc6-178584aa294d] Running
	I0420 00:49:35.482445 1644261 system_pods.go:61] "metrics-server-c59844bb4-jmtz4" [582654f0-7046-465f-b015-d889d5397c3c] Running
	I0420 00:49:35.482458 1644261 system_pods.go:61] "nvidia-device-plugin-daemonset-8wcvh" [1dc1e685-c035-4a95-99c7-d40ef680694c] Running
	I0420 00:49:35.482462 1644261 system_pods.go:61] "registry-proxy-5c8mf" [78326941-b968-43a4-865c-3f7c843b92c7] Running
	I0420 00:49:35.482466 1644261 system_pods.go:61] "registry-sx6fv" [c3fda03d-8cd2-4cff-9835-e17c079b7e05] Running
	I0420 00:49:35.482470 1644261 system_pods.go:61] "snapshot-controller-745499f584-7chnh" [1d82f222-8775-4214-b579-247919a249be] Running
	I0420 00:49:35.482474 1644261 system_pods.go:61] "snapshot-controller-745499f584-nk457" [a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c] Running
	I0420 00:49:35.482478 1644261 system_pods.go:61] "storage-provisioner" [c64f875a-fc82-45a9-acce-a3f649735d47] Running
	I0420 00:49:35.482493 1644261 system_pods.go:74] duration metric: took 11.176551903s to wait for pod list to return data ...
	I0420 00:49:35.482501 1644261 default_sa.go:34] waiting for default service account to be created ...
	I0420 00:49:35.485056 1644261 default_sa.go:45] found service account: "default"
	I0420 00:49:35.485086 1644261 default_sa.go:55] duration metric: took 2.576218ms for default service account to be created ...
	I0420 00:49:35.485096 1644261 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 00:49:35.495868 1644261 system_pods.go:86] 18 kube-system pods found
	I0420 00:49:35.495904 1644261 system_pods.go:89] "coredns-7db6d8ff4d-pj8wd" [ce9c9144-65d1-45f2-a6e0-65ac4c220237] Running
	I0420 00:49:35.495912 1644261 system_pods.go:89] "csi-hostpath-attacher-0" [1407d955-83ec-4b1d-ac07-d55e593f975f] Running
	I0420 00:49:35.495918 1644261 system_pods.go:89] "csi-hostpath-resizer-0" [023884e7-abc6-4359-95ba-ee8031b2db76] Running
	I0420 00:49:35.495922 1644261 system_pods.go:89] "csi-hostpathplugin-z7j5n" [b938be04-8aac-427e-a62d-e0d6ecea4fe9] Running
	I0420 00:49:35.495926 1644261 system_pods.go:89] "etcd-addons-747503" [707cce58-27c7-483a-9f12-80d354c6e443] Running
	I0420 00:49:35.495931 1644261 system_pods.go:89] "kindnet-x7szp" [910dbd2a-9863-4585-8a5d-98c1bb4817e2] Running
	I0420 00:49:35.495936 1644261 system_pods.go:89] "kube-apiserver-addons-747503" [81db4265-6e75-41b4-85b6-c7e09e1979a7] Running
	I0420 00:49:35.495940 1644261 system_pods.go:89] "kube-controller-manager-addons-747503" [f4cfdf92-3a76-49c4-b1f6-3bc7cf34cd49] Running
	I0420 00:49:35.495951 1644261 system_pods.go:89] "kube-ingress-dns-minikube" [ec712066-7b44-45dc-a961-0f7688a75714] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0420 00:49:35.495962 1644261 system_pods.go:89] "kube-proxy-cmk9r" [13976009-573c-4b43-8062-07d9a92cb809] Running
	I0420 00:49:35.495977 1644261 system_pods.go:89] "kube-scheduler-addons-747503" [4c4ccef8-4e11-425f-9dc6-178584aa294d] Running
	I0420 00:49:35.495981 1644261 system_pods.go:89] "metrics-server-c59844bb4-jmtz4" [582654f0-7046-465f-b015-d889d5397c3c] Running
	I0420 00:49:35.495986 1644261 system_pods.go:89] "nvidia-device-plugin-daemonset-8wcvh" [1dc1e685-c035-4a95-99c7-d40ef680694c] Running
	I0420 00:49:35.495993 1644261 system_pods.go:89] "registry-proxy-5c8mf" [78326941-b968-43a4-865c-3f7c843b92c7] Running
	I0420 00:49:35.495999 1644261 system_pods.go:89] "registry-sx6fv" [c3fda03d-8cd2-4cff-9835-e17c079b7e05] Running
	I0420 00:49:35.496006 1644261 system_pods.go:89] "snapshot-controller-745499f584-7chnh" [1d82f222-8775-4214-b579-247919a249be] Running
	I0420 00:49:35.496011 1644261 system_pods.go:89] "snapshot-controller-745499f584-nk457" [a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c] Running
	I0420 00:49:35.496015 1644261 system_pods.go:89] "storage-provisioner" [c64f875a-fc82-45a9-acce-a3f649735d47] Running
	I0420 00:49:35.496023 1644261 system_pods.go:126] duration metric: took 10.920416ms to wait for k8s-apps to be running ...
	I0420 00:49:35.496034 1644261 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 00:49:35.496098 1644261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 00:49:35.510334 1644261 system_svc.go:56] duration metric: took 14.291022ms WaitForService to wait for kubelet
	I0420 00:49:35.510421 1644261 kubeadm.go:576] duration metric: took 2m19.328937561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 00:49:35.510458 1644261 node_conditions.go:102] verifying NodePressure condition ...
	I0420 00:49:35.513887 1644261 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0420 00:49:35.513920 1644261 node_conditions.go:123] node cpu capacity is 2
	I0420 00:49:35.513932 1644261 node_conditions.go:105] duration metric: took 3.453007ms to run NodePressure ...
	I0420 00:49:35.513944 1644261 start.go:240] waiting for startup goroutines ...
	I0420 00:49:35.513972 1644261 start.go:245] waiting for cluster config update ...
	I0420 00:49:35.514000 1644261 start.go:254] writing updated cluster config ...
	I0420 00:49:35.514532 1644261 ssh_runner.go:195] Run: rm -f paused
	I0420 00:49:35.939859 1644261 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 00:49:35.941995 1644261 out.go:177] * Done! kubectl is now configured to use "addons-747503" cluster and "default" namespace by default
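	
	The NodePressure step above simply reads the node's conditions and capacity from the API object; the same check can be reproduced by hand (a sketch, assuming the kubeconfig context created by this run):
	
	  kubectl --context addons-747503 get node addons-747503 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'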
	
	
	==> CRI-O <==
	Apr 20 00:50:28 addons-747503 conmon[3776]: conmon 7aa4adc751b6d6c29502 <ninfo>: container 3787 exited with status 2
	Apr 20 00:50:28 addons-747503 conmon[3598]: conmon 08e64ff964add136757c <ninfo>: container 3609 exited with status 2
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.038686785Z" level=info msg="Stopped container 7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d: kube-system/snapshot-controller-745499f584-nk457/volume-snapshot-controller" id=1969419e-acb7-48cf-b3e0-78f8695be209 name=/runtime.v1.RuntimeService/StopContainer
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.039242280Z" level=info msg="Stopping pod sandbox: 17bfe93fe8dc197e0229c3d548932b2d6aa531e8abcc1ccdae318220ed4712d4" id=127b6986-2c9b-415e-8f93-25350ba268c4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.039493054Z" level=info msg="Got pod network &{Name:snapshot-controller-745499f584-nk457 Namespace:kube-system ID:17bfe93fe8dc197e0229c3d548932b2d6aa531e8abcc1ccdae318220ed4712d4 UID:a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c NetNS:/var/run/netns/8ed4a6d6-4838-41e1-871f-5191dcd81b5e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.039667498Z" level=info msg="Deleting pod kube-system_snapshot-controller-745499f584-nk457 from CNI network \"kindnet\" (type=ptp)"
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.040313050Z" level=info msg="Stopped container 08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501: kube-system/snapshot-controller-745499f584-7chnh/volume-snapshot-controller" id=9d7be035-80fa-4e7b-a989-5838a23ff5a1 name=/runtime.v1.RuntimeService/StopContainer
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.040825790Z" level=info msg="Stopping pod sandbox: f4deae94c5afbdb1d7366218a2fbe5bdef857918627fd404bfc86021b4dcc4c0" id=eb647710-bc87-4b9d-89e0-b6e232d21afc name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.041051916Z" level=info msg="Got pod network &{Name:snapshot-controller-745499f584-7chnh Namespace:kube-system ID:f4deae94c5afbdb1d7366218a2fbe5bdef857918627fd404bfc86021b4dcc4c0 UID:1d82f222-8775-4214-b579-247919a249be NetNS:/var/run/netns/8e9c432f-60c3-4b41-a8f9-004768d852b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.041210065Z" level=info msg="Deleting pod kube-system_snapshot-controller-745499f584-7chnh from CNI network \"kindnet\" (type=ptp)"
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.063731766Z" level=info msg="Stopped pod sandbox: 17bfe93fe8dc197e0229c3d548932b2d6aa531e8abcc1ccdae318220ed4712d4" id=127b6986-2c9b-415e-8f93-25350ba268c4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.076007327Z" level=info msg="Stopped pod sandbox: f4deae94c5afbdb1d7366218a2fbe5bdef857918627fd404bfc86021b4dcc4c0" id=eb647710-bc87-4b9d-89e0-b6e232d21afc name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.791124327Z" level=info msg="Removing container: 7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d" id=9f1f0843-1589-4f7f-9576-7e4f750b3ace name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.814700125Z" level=info msg="Removed container 7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d: kube-system/snapshot-controller-745499f584-nk457/volume-snapshot-controller" id=9f1f0843-1589-4f7f-9576-7e4f750b3ace name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.817468698Z" level=info msg="Removing container: 08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501" id=4f4c2682-ecf3-4451-92cf-582be80eca2b name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:50:29 addons-747503 crio[920]: time="2024-04-20 00:50:29.840314919Z" level=info msg="Removed container 08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501: kube-system/snapshot-controller-745499f584-7chnh/volume-snapshot-controller" id=4f4c2682-ecf3-4451-92cf-582be80eca2b name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:50:35 addons-747503 crio[920]: time="2024-04-20 00:50:35.508600360Z" level=info msg="Stopping container: e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20 (timeout: 30s)" id=5f7f0db5-c663-401d-b23c-a8af66e0a548 name=/runtime.v1.RuntimeService/StopContainer
	Apr 20 00:50:35 addons-747503 conmon[4558]: conmon e48bd1e0eee5484a24d4 <ninfo>: container 4571 exited with status 2
	Apr 20 00:50:35 addons-747503 crio[920]: time="2024-04-20 00:50:35.668424916Z" level=info msg="Stopped container e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20: default/cloud-spanner-emulator-8677549d7-7lmgv/cloud-spanner-emulator" id=5f7f0db5-c663-401d-b23c-a8af66e0a548 name=/runtime.v1.RuntimeService/StopContainer
	Apr 20 00:50:35 addons-747503 crio[920]: time="2024-04-20 00:50:35.669122084Z" level=info msg="Stopping pod sandbox: 8a9af8274bc75e1ebaca0f966d197129d52378a75580690bb854be7586ff1662" id=2839fd31-2208-417b-a56b-93f7c18e4fd5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:50:35 addons-747503 crio[920]: time="2024-04-20 00:50:35.669338627Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-8677549d7-7lmgv Namespace:default ID:8a9af8274bc75e1ebaca0f966d197129d52378a75580690bb854be7586ff1662 UID:48f65c56-a870-4d8d-b6d5-9e8070d92042 NetNS:/var/run/netns/ec8f804e-d0a9-4952-840c-be20290d23ba Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 20 00:50:35 addons-747503 crio[920]: time="2024-04-20 00:50:35.669468749Z" level=info msg="Deleting pod default_cloud-spanner-emulator-8677549d7-7lmgv from CNI network \"kindnet\" (type=ptp)"
	Apr 20 00:50:35 addons-747503 crio[920]: time="2024-04-20 00:50:35.706090087Z" level=info msg="Stopped pod sandbox: 8a9af8274bc75e1ebaca0f966d197129d52378a75580690bb854be7586ff1662" id=2839fd31-2208-417b-a56b-93f7c18e4fd5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 20 00:50:35 addons-747503 crio[920]: time="2024-04-20 00:50:35.825083093Z" level=info msg="Removing container: e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20" id=dcf1f22c-7ece-4329-953a-3a3700ade2eb name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 20 00:50:35 addons-747503 crio[920]: time="2024-04-20 00:50:35.851772045Z" level=info msg="Removed container e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20: default/cloud-spanner-emulator-8677549d7-7lmgv/cloud-spanner-emulator" id=dcf1f22c-7ece-4329-953a-3a3700ade2eb name=/runtime.v1.RuntimeService/RemoveContainer
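	
	The CRI-O entries above show the usual teardown order per pod: StopContainer, then StopPodSandbox (which deletes the CNI attachment, here the kindnet ptp network), then RemoveContainer once the kubelet garbage-collects. Stopped sandboxes can still be listed and inspected from the node until they are removed (a sketch; the sandbox ID is copied from the log above and may already be gone):
	
	  out/minikube-linux-arm64 -p addons-747503 ssh "sudo crictl pods | grep NotReady"
	  out/minikube-linux-arm64 -p addons-747503 ssh "sudo crictl inspectp 17bfe93fe8dc197e0229c3d548932b2d6aa531e8abcc1ccdae318220ed4712d4"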
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	74bd17412be7b       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                             22 seconds ago       Exited              helper-pod                0                   80f30717e73dd       helper-pod-delete-pvc-b29b3cd7-c850-4a4e-b0ba-8a8cc403a41d
	d01f863bd45ea       docker.io/library/busybox@sha256:15b3852228f2a4251fb997ce32a52204b76babcaae22df16cac5e217d95a5c07                            25 seconds ago       Exited              busybox                   0                   c87368dcc9c45       test-local-path
	910f96f439cdc       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                            29 seconds ago       Exited              helper-pod                0                   14ddaffed20dc       helper-pod-create-pvc-b29b3cd7-c850-4a4e-b0ba-8a8cc403a41d
	97e581274b9ea       1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a                                                             About a minute ago   Exited              minikube-ingress-dns      4                   ca5e8f9450c0d       kube-ingress-dns-minikube
	19dc0b4cf1917       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:a0c58a03bd7b31512e187f86e72a18feb6fb938e744a713efcfe5ef5418aa1cd            About a minute ago   Exited              gadget                    4                   91e2dd6b49779       gadget-j48lz
	350ed3938381c       registry.k8s.io/ingress-nginx/controller@sha256:2e53c57c81ebad0263e98c98b66f0151217ce417eeeaacf911e62cc3dbae27e4             About a minute ago   Running             controller                0                   a4bb8a14e383e       ingress-nginx-controller-84df5799c-j6rf2
	065eeb203edc3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 About a minute ago   Running             gcp-auth                  0                   12318430bbe8d       gcp-auth-5db96cd9b4-dg9c5
	1b601878179f4       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             2 minutes ago        Running             local-path-provisioner    0                   142b0ad3ded07       local-path-provisioner-8d985888d-qfg97
	c7bd8cacd1c82       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              2 minutes ago        Running             yakd                      0                   b1192b4bbd9c1       yakd-dashboard-5ddbf7d777-q5cff
	ce2467374744f       1a024e390dd050d584b5c93bb30810e8be713157ab713b0d77a7af14dfe88c1e                                                             2 minutes ago        Exited              patch                     1                   df393a718fd03       ingress-nginx-admission-patch-zm4pk
	2bd8cbc349360       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0b1098ef00acee905f9736f98dd151af0a38d0fef0ccf9fb5ad189b20933e5f8   2 minutes ago        Exited              create                    0                   f2bef6044a1b7       ingress-nginx-admission-create-p788l
	d44171fb37303       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        2 minutes ago        Running             metrics-server            0                   48679652e7ffe       metrics-server-c59844bb4-jmtz4
	22c56a3e8a0fe       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             2 minutes ago        Running             storage-provisioner       0                   d7e961b6341a3       storage-provisioner
	dfc51e1c1bccd       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             2 minutes ago        Running             coredns                   0                   1d5e91a66a006       coredns-7db6d8ff4d-pj8wd
	b21e49c0bda54       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                             3 minutes ago        Running             kindnet-cni               0                   48b6b802564de       kindnet-x7szp
	8504f24d60ff9       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f                                                             3 minutes ago        Running             kube-proxy                0                   a5fb2119d00b2       kube-proxy-cmk9r
	efdbc1a5337c8       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a                                                             3 minutes ago        Running             kube-scheduler            0                   db745aaf12fb3       kube-scheduler-addons-747503
	d7b31a1429803       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb                                                             3 minutes ago        Running             kube-apiserver            0                   e3358216037d1       kube-apiserver-addons-747503
	120c278a1bb92       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1                                                             3 minutes ago        Running             kube-controller-manager   0                   32129d92cb9e3       kube-controller-manager-addons-747503
	dc5579e3b8be4       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             3 minutes ago        Running             etcd                      0                   0793765290d5b       etcd-addons-747503
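	
	This table is crictl's view of the node; the Exited helper-pod and busybox rows are leftovers from the just-finished local-path test, and the truncated IDs in the first column are accepted as prefixes by crictl itself. Per-container detail and logs follow the same pattern the log collector used above (a sketch, reusing the coredns ID):
	
	  out/minikube-linux-arm64 -p addons-747503 ssh "sudo crictl inspect dfc51e1c1bccd"
	  out/minikube-linux-arm64 -p addons-747503 ssh "sudo crictl logs --tail 20 dfc51e1c1bccd"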
	
	
	==> coredns [dfc51e1c1bccdb2033408505e079965b15fce9eed3eea1c304d59990af7522df] <==
	[INFO] 10.244.0.5:59804 - 29428 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002639148s
	[INFO] 10.244.0.5:40945 - 12378 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000168988s
	[INFO] 10.244.0.5:40945 - 46676 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000468991s
	[INFO] 10.244.0.5:39492 - 51658 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123721s
	[INFO] 10.244.0.5:39492 - 27593 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000094085s
	[INFO] 10.244.0.5:41175 - 2345 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075731s
	[INFO] 10.244.0.5:41175 - 27444 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085142s
	[INFO] 10.244.0.5:35059 - 28980 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000071489s
	[INFO] 10.244.0.5:35059 - 28465 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041845s
	[INFO] 10.244.0.5:57186 - 8632 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001461248s
	[INFO] 10.244.0.5:57186 - 12966 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002099948s
	[INFO] 10.244.0.5:52302 - 23869 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096144s
	[INFO] 10.244.0.5:52302 - 16959 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00007811s
	[INFO] 10.244.0.19:37971 - 64212 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000186489s
	[INFO] 10.244.0.19:48065 - 40668 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014073s
	[INFO] 10.244.0.19:59916 - 27232 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145768s
	[INFO] 10.244.0.19:47686 - 43653 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000146195s
	[INFO] 10.244.0.19:52084 - 5957 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138145s
	[INFO] 10.244.0.19:51746 - 32102 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088588s
	[INFO] 10.244.0.19:35594 - 22629 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002313825s
	[INFO] 10.244.0.19:33891 - 5938 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001893957s
	[INFO] 10.244.0.19:46830 - 48614 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00070187s
	[INFO] 10.244.0.19:51336 - 23027 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000683745s
	[INFO] 10.244.0.21:37015 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000210127s
	[INFO] 10.244.0.21:53419 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141805s
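	
	The NXDOMAIN bursts above are expected resolver behavior, not failures: with the default pod resolv.conf (ndots:5), a name like registry.kube-system.svc.cluster.local is first expanded through every search domain (...kube-system.svc.cluster.local, ...svc.cluster.local, ...cluster.local, and the runner's us-east-2.compute.internal suffix) before the bare name answers NOERROR. The search list can be confirmed from a throwaway pod (a sketch; busybox:1.36 is an arbitrary image choice):
	
	  kubectl --context addons-747503 run resolv-test --image=busybox:1.36 \
	    --restart=Never --rm -i -- cat /etc/resolv.conf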
	
	
	==> describe nodes <==
	Name:               addons-747503
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-747503
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=addons-747503
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T00_47_03_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-747503
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 00:46:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-747503
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 00:50:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 00:50:07 +0000   Sat, 20 Apr 2024 00:46:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 00:50:07 +0000   Sat, 20 Apr 2024 00:46:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 00:50:07 +0000   Sat, 20 Apr 2024 00:46:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 00:50:07 +0000   Sat, 20 Apr 2024 00:47:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-747503
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb345c96e51549588e3445f8f88cea8c
	  System UUID:                338aa8bd-646a-4cfc-b77a-f650366b6c8a
	  Boot ID:                    cdaae8f5-66dd-4dda-afdc-9b84bbb262c1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-j48lz                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  gcp-auth                    gcp-auth-5db96cd9b4-dg9c5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  ingress-nginx               ingress-nginx-controller-84df5799c-j6rf2    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         3m15s
	  kube-system                 coredns-7db6d8ff4d-pj8wd                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m21s
	  kube-system                 etcd-addons-747503                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m35s
	  kube-system                 kindnet-x7szp                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m21s
	  kube-system                 kube-apiserver-addons-747503                250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-addons-747503       200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 kube-proxy-cmk9r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 kube-scheduler-addons-747503                100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 metrics-server-c59844bb4-jmtz4              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         3m17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  local-path-storage          local-path-provisioner-8d985888d-qfg97      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-q5cff             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     3m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m42s (x8 over 3m42s)  kubelet          Node addons-747503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s (x8 over 3m42s)  kubelet          Node addons-747503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s (x8 over 3m42s)  kubelet          Node addons-747503 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m35s                  kubelet          Node addons-747503 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s                  kubelet          Node addons-747503 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s                  kubelet          Node addons-747503 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m22s                  node-controller  Node addons-747503 event: Registered Node addons-747503 in Controller
	  Normal  NodeReady                2m47s                  kubelet          Node addons-747503 status is now: NodeReady
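	
	The request/limit percentages in the tables above are computed against the allocatable figures shown earlier (2 CPU, 8022564Ki memory). The same summary can be pulled on demand (a sketch):
	
	  kubectl --context addons-747503 describe node addons-747503 | grep -A10 "Allocated resources"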
	
	
	==> dmesg <==
	[  +0.000807] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000be586629
	[  +0.001092] FS-Cache: N-key=[8] '15d8c90000000000'
	[  +0.002828] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=000000008f558ce4
	[  +0.001060] FS-Cache: O-key=[8] '15d8c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000906] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000f46698ff
	[  +0.001011] FS-Cache: N-key=[8] '15d8c90000000000'
	[  +3.061970] FS-Cache: Duplicate cookie detected
	[  +0.000754] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001064] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=00000000ea440894
	[  +0.001029] FS-Cache: O-key=[8] '14d8c90000000000'
	[  +0.000778] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001045] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000999f4db4
	[  +0.001563] FS-Cache: N-key=[8] '14d8c90000000000'
	[  +0.297624] FS-Cache: Duplicate cookie detected
	[  +0.000690] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000919] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=00000000e5d6a697
	[  +0.001014] FS-Cache: O-key=[8] '1ad8c90000000000'
	[  +0.000691] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001016] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000be586629
	[  +0.001047] FS-Cache: N-key=[8] '1ad8c90000000000'
	[Apr20 00:19] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
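	
	The FS-Cache "Duplicate cookie detected" spam references 9p inodes and appears to be noise from the shared host kernel (the Docker driver does not give the node its own kernel), so the only entry relevant to this cluster is the final overlayfs warning. The filtered view gathered above can be reproduced with the exact command the log collector ran:
	
	  out/minikube-linux-arm64 -p addons-747503 ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"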
	
	
	==> etcd [dc5579e3b8be4adac4470925230e30fc4b51285937109a83bf34bdf48c609330] <==
	{"level":"info","ts":"2024-04-20T00:46:57.326346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-20T00:46:57.327877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-20T00:46:57.329336Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-20T00:46:57.333604Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:46:57.388525Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-20T00:46:57.388574Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-04-20T00:47:16.924241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.94691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-cmk9r\" ","response":"range_response_count:1 size:4633"}
	{"level":"info","ts":"2024-04-20T00:47:16.924761Z","caller":"traceutil/trace.go:171","msg":"trace[1215544873] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-cmk9r; range_end:; response_count:1; response_revision:378; }","duration":"125.496234ms","start":"2024-04-20T00:47:16.799258Z","end":"2024-04-20T00:47:16.924754Z","steps":["trace[1215544873] 'agreement among raft nodes before linearized reading'  (duration: 124.890747ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.924393Z","caller":"traceutil/trace.go:171","msg":"trace[153633669] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"125.22924ms","start":"2024-04-20T00:47:16.799148Z","end":"2024-04-20T00:47:16.924378Z","steps":["trace[153633669] 'process raft request'  (duration: 124.905082ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.924546Z","caller":"traceutil/trace.go:171","msg":"trace[925466315] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"125.445036ms","start":"2024-04-20T00:47:16.799094Z","end":"2024-04-20T00:47:16.924539Z","steps":["trace[925466315] 'process raft request'  (duration: 124.874059ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.924681Z","caller":"traceutil/trace.go:171","msg":"trace[470147017] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"125.389169ms","start":"2024-04-20T00:47:16.799285Z","end":"2024-04-20T00:47:16.924674Z","steps":["trace[470147017] 'process raft request'  (duration: 124.793429ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-20T00:47:16.92472Z","caller":"traceutil/trace.go:171","msg":"trace[1120083311] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"125.391983ms","start":"2024-04-20T00:47:16.799322Z","end":"2024-04-20T00:47:16.924714Z","steps":["trace[1120083311] 'process raft request'  (duration: 124.790048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:18.67798Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.383847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-x7szp\" ","response":"range_response_count:1 size:4910"}
	{"level":"info","ts":"2024-04-20T00:47:18.755921Z","caller":"traceutil/trace.go:171","msg":"trace[1811675749] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:402; }","duration":"118.796807ms","start":"2024-04-20T00:47:18.637099Z","end":"2024-04-20T00:47:18.755896Z","steps":["trace[1811675749] 'read index received'  (duration: 118.790883ms)","trace[1811675749] 'applied index is now lower than readState.Index'  (duration: 4.701µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-20T00:47:18.789714Z","caller":"traceutil/trace.go:171","msg":"trace[1490804175] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-x7szp; range_end:; response_count:1; response_revision:389; }","duration":"308.029626ms","start":"2024-04-20T00:47:18.448578Z","end":"2024-04-20T00:47:18.756608Z","steps":["trace[1490804175] 'agreement among raft nodes before linearized reading'  (duration: 229.28301ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.14332Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.448539Z","time spent":"694.609452ms","remote":"127.0.0.1:48224","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":4934,"request content":"key:\"/registry/pods/kube-system/kindnet-x7szp\" "}
	{"level":"warn","ts":"2024-04-20T00:47:19.166679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"490.097119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-7db6d8ff4d-pj8wd.17c7d681fb332ee5\" ","response":"range_response_count:1 size:844"}
	{"level":"info","ts":"2024-04-20T00:47:19.183579Z","caller":"traceutil/trace.go:171","msg":"trace[1385382603] range","detail":"{range_begin:/registry/events/kube-system/coredns-7db6d8ff4d-pj8wd.17c7d681fb332ee5; range_end:; response_count:1; response_revision:389; }","duration":"497.334583ms","start":"2024-04-20T00:47:18.676557Z","end":"2024-04-20T00:47:19.173891Z","steps":["trace[1385382603] 'agreement among raft nodes before linearized reading'  (duration: 490.031645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.189018Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.676516Z","time spent":"512.471167ms","remote":"127.0.0.1:48100","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":868,"request content":"key:\"/registry/events/kube-system/coredns-7db6d8ff4d-pj8wd.17c7d681fb332ee5\" "}
	{"level":"warn","ts":"2024-04-20T00:47:19.186049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"429.718807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-04-20T00:47:19.186088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"508.974865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-20T00:47:19.191942Z","caller":"traceutil/trace.go:171","msg":"trace[2029096551] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:389; }","duration":"514.81594ms","start":"2024-04-20T00:47:18.677109Z","end":"2024-04-20T00:47:19.191925Z","steps":["trace[2029096551] 'agreement among raft nodes before linearized reading'  (duration: 508.967169ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.25392Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.677088Z","time spent":"576.805526ms","remote":"127.0.0.1:48534","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":29,"request content":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" "}
	{"level":"info","ts":"2024-04-20T00:47:19.224875Z","caller":"traceutil/trace.go:171","msg":"trace[1801637594] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:389; }","duration":"468.545511ms","start":"2024-04-20T00:47:18.756311Z","end":"2024-04-20T00:47:19.224857Z","steps":["trace[1801637594] 'agreement among raft nodes before linearized reading'  (duration: 429.691937ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T00:47:19.254181Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T00:47:18.756257Z","time spent":"497.915488ms","remote":"127.0.0.1:48514","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/default/cloud-spanner-emulator\" "}
	
	
	==> gcp-auth [065eeb203edc3606ff24136ef272bf67f73b81ea9764ef0b86090be0bcf9d3e6] <==
	2024/04/20 00:48:57 GCP Auth Webhook started!
	2024/04/20 00:49:47 Ready to marshal response ...
	2024/04/20 00:49:47 Ready to write response ...
	2024/04/20 00:49:47 Ready to marshal response ...
	2024/04/20 00:49:47 Ready to write response ...
	2024/04/20 00:50:05 Ready to marshal response ...
	2024/04/20 00:50:05 Ready to write response ...
	2024/04/20 00:50:05 Ready to marshal response ...
	2024/04/20 00:50:05 Ready to write response ...
	2024/04/20 00:50:12 Ready to marshal response ...
	2024/04/20 00:50:12 Ready to write response ...
	2024/04/20 00:50:14 Ready to marshal response ...
	2024/04/20 00:50:14 Ready to write response ...
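	
	Each "Ready to marshal response ... / Ready to write response ..." pair is one admission call into the gcp-auth mutating webhook, which injects GCP credentials into newly created pods; the 00:50:05-00:50:14 entries appear to correspond to the test-local-path helper pods visible in the container status section. Its registration, and thus which objects it intercepts, can be listed with (a sketch):
	
	  kubectl --context addons-747503 get mutatingwebhookconfigurations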
	
	
	==> kernel <==
	 00:50:37 up  7:33,  0 users,  load average: 1.12, 2.02, 2.30
	Linux addons-747503 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b21e49c0bda548c1eb5946f72ae33e703f302aefe2a8b624f2febca53de7bc52] <==
	I0420 00:48:30.528865       1 main.go:227] handling current node
	I0420 00:48:40.542591       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:48:40.542623       1 main.go:227] handling current node
	I0420 00:48:50.546336       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:48:50.546367       1 main.go:227] handling current node
	I0420 00:49:00.557776       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:49:00.557806       1 main.go:227] handling current node
	I0420 00:49:10.562295       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:49:10.562327       1 main.go:227] handling current node
	I0420 00:49:20.566636       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:49:20.566665       1 main.go:227] handling current node
	I0420 00:49:30.578236       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:49:30.578274       1 main.go:227] handling current node
	I0420 00:49:40.582769       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:49:40.582801       1 main.go:227] handling current node
	I0420 00:49:50.594123       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:49:50.594154       1 main.go:227] handling current node
	I0420 00:50:00.601043       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:50:00.601130       1 main.go:227] handling current node
	I0420 00:50:10.605381       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:50:10.605411       1 main.go:227] handling current node
	I0420 00:50:20.610397       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:50:20.610427       1 main.go:227] handling current node
	I0420 00:50:30.614665       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 00:50:30.614702       1 main.go:227] handling current node
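	
	kindnet re-lists nodes roughly every 10 seconds and programs routes for each remote node's PodCIDR; on this single-node cluster every pass reduces to "handling current node" with nothing to add. If routes need checking, kindnet's ptp plugin should leave one host route per local pod in 10.244.0.0/24 (a sketch; the CIDR comes from the node description above):
	
	  out/minikube-linux-arm64 -p addons-747503 ssh "ip route show | grep 10.244"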
	
	
	==> kube-apiserver [d7b31a1429803bbc1a1a428ba1eac2ef7f48e86e1cd6e2cf1c432bec1007e053] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0420 00:48:22.471711       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0420 00:49:01.186178       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	W0420 00:49:01.186508       1 handler_proxy.go:93] no RequestInfo found in the context
	E0420 00:49:01.186618       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0420 00:49:01.187795       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	E0420 00:49:01.193314       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	E0420 00:49:01.214414       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.3.226:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.3.226:443: connect: connection refused
	I0420 00:49:01.426379       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	http2: server: error reading preface from client 192.168.49.1:38014: read tcp 192.168.49.2:8443->192.168.49.1:38014: read: connection reset by peer
	I0420 00:50:00.595117       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0420 00:50:28.670545       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.670597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:50:28.715804       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.715853       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:50:28.748957       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.749037       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0420 00:50:28.834347       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0420 00:50:28.836069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0420 00:50:28.986860       1 watch.go:250] http2: stream closed
	W0420 00:50:29.716466       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0420 00:50:29.834147       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0420 00:50:29.854747       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0420 00:50:30.599412       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
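	
	The burst of "v1beta1.metrics.k8s.io failed ... connection refused" entries at 00:49:01 is the aggregation layer probing metrics-server while its endpoint was briefly unreachable; this is the same APIService the failed TestAddons/parallel/MetricsServer run waits on. Its availability condition can be read directly (a sketch):
	
	  kubectl --context addons-747503 get apiservice v1beta1.metrics.k8s.io \
	    -o jsonpath='{.status.conditions[?(@.type=="Available")].status}: {.status.conditions[?(@.type=="Available")].message}{"\n"}'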
	
	
	==> kube-controller-manager [120c278a1bb925ec4a1ee180446e9d193c7aedc259d9694dfffac91a03117d2e] <==
	I0420 00:49:53.657657       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="9.337µs"
	I0420 00:50:15.290231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-8d985888d" duration="216.674µs"
	I0420 00:50:22.153730       1 stateful_set.go:458] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0420 00:50:22.254210       1 stateful_set.go:458] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0420 00:50:28.874495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="4.324µs"
	E0420 00:50:29.718937       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:29.836695       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:29.856235       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:50:30.749688       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:30.749727       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:50:30.900031       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:30.900155       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:50:30.911871       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:30.911911       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:50:32.650378       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:32.650414       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:50:32.918819       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:32.918857       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:50:33.352741       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:33.352785       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0420 00:50:35.492251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-8677549d7" duration="6.646µs"
	W0420 00:50:37.344590       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:37.344627       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0420 00:50:37.549376       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0420 00:50:37.549414       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [8504f24d60ff9009f45e0bbb6d695589a6af2b97f09a02965f72cdc47fe2fe20] <==
	I0420 00:47:21.144641       1 server_linux.go:69] "Using iptables proxy"
	I0420 00:47:21.218151       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0420 00:47:21.752491       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0420 00:47:21.752619       1 server_linux.go:165] "Using iptables Proxier"
	I0420 00:47:21.778013       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0420 00:47:21.778127       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0420 00:47:21.778180       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 00:47:21.778432       1 server.go:872] "Version info" version="v1.30.0"
	I0420 00:47:21.778963       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 00:47:21.779916       1 config.go:192] "Starting service config controller"
	I0420 00:47:21.780016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 00:47:21.780072       1 config.go:101] "Starting endpoint slice config controller"
	I0420 00:47:21.780100       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 00:47:21.780654       1 config.go:319] "Starting node config controller"
	I0420 00:47:21.780711       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 00:47:21.880269       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0420 00:47:21.885308       1 shared_informer.go:320] Caches are synced for node config
	I0420 00:47:21.885552       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [efdbc1a5337c8ef08aa44a9a46119e00ebbce80a613213aa7ebe6169ce084929] <==
	W0420 00:46:59.940609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:46:59.940660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:46:59.940762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0420 00:46:59.940813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0420 00:46:59.940912       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:46:59.940952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:46:59.941040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:46:59.941188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:46:59.941145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 00:46:59.941293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0420 00:47:00.905621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0420 00:47:00.905761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0420 00:47:00.906996       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 00:47:00.907089       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 00:47:00.942823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0420 00:47:00.942862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0420 00:47:00.951864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0420 00:47:00.952007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0420 00:47:01.030920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0420 00:47:01.031101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0420 00:47:01.032673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0420 00:47:01.032706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0420 00:47:01.039542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0420 00:47:01.039661       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0420 00:47:03.025762       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.104832    1518 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c-kube-api-access-2jrmd" (OuterVolumeSpecName: "kube-api-access-2jrmd") pod "a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c" (UID: "a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c"). InnerVolumeSpecName "kube-api-access-2jrmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.202166    1518 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2jrmd\" (UniqueName: \"kubernetes.io/projected/a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c-kube-api-access-2jrmd\") on node \"addons-747503\" DevicePath \"\""
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.202205    1518 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fddj6\" (UniqueName: \"kubernetes.io/projected/1d82f222-8775-4214-b579-247919a249be-kube-api-access-fddj6\") on node \"addons-747503\" DevicePath \"\""
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.357904    1518 scope.go:117] "RemoveContainer" containerID="97e581274b9eaf6bdffbfc2dee9a0dbfa70878a170e9b1d5127d8e45553a3fa5"
	Apr 20 00:50:29 addons-747503 kubelet[1518]: E0420 00:50:29.358157    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ec712066-7b44-45dc-a961-0f7688a75714)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ec712066-7b44-45dc-a961-0f7688a75714"
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.789743    1518 scope.go:117] "RemoveContainer" containerID="7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d"
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.815689    1518 scope.go:117] "RemoveContainer" containerID="7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d"
	Apr 20 00:50:29 addons-747503 kubelet[1518]: E0420 00:50:29.816113    1518 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d\": container with ID starting with 7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d not found: ID does not exist" containerID="7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d"
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.816196    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d"} err="failed to get container status \"7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d\": rpc error: code = NotFound desc = could not find container \"7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d\": container with ID starting with 7aa4adc751b6d6c29502fd4163c593fb4bd687d4697b33f4e20d2e456c1f5d5d not found: ID does not exist"
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.816229    1518 scope.go:117] "RemoveContainer" containerID="08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501"
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.840765    1518 scope.go:117] "RemoveContainer" containerID="08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501"
	Apr 20 00:50:29 addons-747503 kubelet[1518]: E0420 00:50:29.841260    1518 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501\": container with ID starting with 08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501 not found: ID does not exist" containerID="08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501"
	Apr 20 00:50:29 addons-747503 kubelet[1518]: I0420 00:50:29.841293    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501"} err="failed to get container status \"08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501\": rpc error: code = NotFound desc = could not find container \"08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501\": container with ID starting with 08e64ff964add136757c046c09ec66fa5fc7e948a28c9af513e22a628bf00501 not found: ID does not exist"
	Apr 20 00:50:30 addons-747503 kubelet[1518]: I0420 00:50:30.359200    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d82f222-8775-4214-b579-247919a249be" path="/var/lib/kubelet/pods/1d82f222-8775-4214-b579-247919a249be/volumes"
	Apr 20 00:50:30 addons-747503 kubelet[1518]: I0420 00:50:30.360085    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c" path="/var/lib/kubelet/pods/a90bbeca-e4e7-4d3e-9eda-bf44e5d15f2c/volumes"
	Apr 20 00:50:35 addons-747503 kubelet[1518]: I0420 00:50:35.814318    1518 scope.go:117] "RemoveContainer" containerID="e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20"
	Apr 20 00:50:35 addons-747503 kubelet[1518]: I0420 00:50:35.843551    1518 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z84zh\" (UniqueName: \"kubernetes.io/projected/48f65c56-a870-4d8d-b6d5-9e8070d92042-kube-api-access-z84zh\") pod \"48f65c56-a870-4d8d-b6d5-9e8070d92042\" (UID: \"48f65c56-a870-4d8d-b6d5-9e8070d92042\") "
	Apr 20 00:50:35 addons-747503 kubelet[1518]: I0420 00:50:35.850015    1518 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f65c56-a870-4d8d-b6d5-9e8070d92042-kube-api-access-z84zh" (OuterVolumeSpecName: "kube-api-access-z84zh") pod "48f65c56-a870-4d8d-b6d5-9e8070d92042" (UID: "48f65c56-a870-4d8d-b6d5-9e8070d92042"). InnerVolumeSpecName "kube-api-access-z84zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 20 00:50:35 addons-747503 kubelet[1518]: I0420 00:50:35.852042    1518 scope.go:117] "RemoveContainer" containerID="e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20"
	Apr 20 00:50:35 addons-747503 kubelet[1518]: E0420 00:50:35.852457    1518 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20\": container with ID starting with e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20 not found: ID does not exist" containerID="e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20"
	Apr 20 00:50:35 addons-747503 kubelet[1518]: I0420 00:50:35.852497    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20"} err="failed to get container status \"e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20\": rpc error: code = NotFound desc = could not find container \"e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20\": container with ID starting with e48bd1e0eee5484a24d42786860461d8af7d4af25fec40a358c500b01868cf20 not found: ID does not exist"
	Apr 20 00:50:35 addons-747503 kubelet[1518]: I0420 00:50:35.944076    1518 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z84zh\" (UniqueName: \"kubernetes.io/projected/48f65c56-a870-4d8d-b6d5-9e8070d92042-kube-api-access-z84zh\") on node \"addons-747503\" DevicePath \"\""
	Apr 20 00:50:36 addons-747503 kubelet[1518]: I0420 00:50:36.360838    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f65c56-a870-4d8d-b6d5-9e8070d92042" path="/var/lib/kubelet/pods/48f65c56-a870-4d8d-b6d5-9e8070d92042/volumes"
	Apr 20 00:50:37 addons-747503 kubelet[1518]: I0420 00:50:37.358480    1518 scope.go:117] "RemoveContainer" containerID="19dc0b4cf1917780a27a2491c0a928911522287fc04ce99486af7ce3d47e1b81"
	Apr 20 00:50:37 addons-747503 kubelet[1518]: E0420 00:50:37.359466    1518 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-j48lz_gadget(1c6fda8f-82c7-43ad-8c7d-11de076291e3)\"" pod="gadget/gadget-j48lz" podUID="1c6fda8f-82c7-43ad-8c7d-11de076291e3"
	
	
	==> storage-provisioner [22c56a3e8a0fed567d434d23c22e5fb9e361b66b1c454f968e6ca7a6a7da876d] <==
	I0420 00:47:51.832097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0420 00:47:51.868002       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0420 00:47:51.868049       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0420 00:47:51.883550       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0420 00:47:51.884602       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69e938c5-ddcb-47d2-89e5-2e78c1a90077", APIVersion:"v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-747503_fa266937-e3d1-47aa-bd72-27b9ca80792a became leader
	I0420 00:47:51.887962       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-747503_fa266937-e3d1-47aa-bd72-27b9ca80792a!
	I0420 00:47:51.988837       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-747503_fa266937-e3d1-47aa-bd72-27b9ca80792a!
	

-- /stdout --
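
The repeated v1beta1.metrics.k8s.io availability errors in the kube-apiserver log above ("connection refused" against 10.99.3.226:443) mean the aggregated metrics APIService had no reachable backend, which is consistent with the TestAddons/parallel/MetricsServer failure. As a sketch of how to confirm this against the same profile, assuming the addons-747503 context is still reachable, the APIService's Available condition can be read directly:

	kubectl --context addons-747503 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}: {.status.conditions[?(@.type=="Available")].message}{"\n"}'
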
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-747503 -n addons-747503
helpers_test.go:261: (dbg) Run:  kubectl --context addons-747503 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-p788l ingress-nginx-admission-patch-zm4pk
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-747503 describe pod ingress-nginx-admission-create-p788l ingress-nginx-admission-patch-zm4pk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-747503 describe pod ingress-nginx-admission-create-p788l ingress-nginx-admission-patch-zm4pk: exit status 1 (78.899222ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-p788l" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zm4pk" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-747503 describe pod ingress-nginx-admission-create-p788l ingress-nginx-admission-patch-zm4pk: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (3.02s)
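
A note on the Headlamp post-mortem above: the two pods it tried to describe are the ingress-nginx admission webhook Jobs, which run to completion and are garbage-collected afterwards, hence the NotFound errors. To list only genuinely stuck pods, the field selector used at helpers_test.go:261 could also exclude completed ones; a sketch, assuming the standard pod phase values:

	kubectl --context addons-747503 get po -A -o=jsonpath={.items[*].metadata.name} \
	  --field-selector=status.phase!=Running,status.phase!=Succeeded
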

TestMultiControlPlane/serial/RestartCluster (123.58s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-159256 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0420 01:10:41.721211 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-159256 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m59.18213303s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE    VERSION
	ha-159256       NotReady   control-plane   10m    v1.30.0
	ha-159256-m02   Ready      control-plane   10m    v1.30.0
	ha-159256-m04   Ready      <none>          8m3s   v1.30.0

-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

-- /stdout --
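
The lone Unknown above is the Ready condition of ha-159256, the NotReady control plane; a node's Ready status becomes Unknown when its kubelet stops posting heartbeats. For reference, the same per-node check can be written with jsonpath instead of the test's go-template, pairing each node name with its Ready status (a sketch against the current kubeconfig context):

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
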
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-159256
helpers_test.go:235: (dbg) docker inspect ha-159256:

-- stdout --
	[
	    {
	        "Id": "9432785ebd3e48b7cae35953ca8636442d5943b3ad3a724262492a22f74c77fd",
	        "Created": "2024-04-20T01:01:28.034194574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1701773,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-20T01:10:36.909932899Z",
	            "FinishedAt": "2024-04-20T01:10:35.98197985Z"
	        },
	        "Image": "sha256:3b2d88ca3ca9b0dbaf60124ea2550b937bd64c7063d7cb640718ddb37cba13b1",
	        "ResolvConfPath": "/var/lib/docker/containers/9432785ebd3e48b7cae35953ca8636442d5943b3ad3a724262492a22f74c77fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9432785ebd3e48b7cae35953ca8636442d5943b3ad3a724262492a22f74c77fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/9432785ebd3e48b7cae35953ca8636442d5943b3ad3a724262492a22f74c77fd/hosts",
	        "LogPath": "/var/lib/docker/containers/9432785ebd3e48b7cae35953ca8636442d5943b3ad3a724262492a22f74c77fd/9432785ebd3e48b7cae35953ca8636442d5943b3ad3a724262492a22f74c77fd-json.log",
	        "Name": "/ha-159256",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-159256:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-159256",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/02d6f5411bc14fa0683981d3811363618c135a98c12894f1ae1adca511f67c45-init/diff:/var/lib/docker/overlay2/e0471a8635b1d2c4e15ee92afa46c7d34f76188a5b6aa3cb3689b7cec908b9a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02d6f5411bc14fa0683981d3811363618c135a98c12894f1ae1adca511f67c45/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02d6f5411bc14fa0683981d3811363618c135a98c12894f1ae1adca511f67c45/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02d6f5411bc14fa0683981d3811363618c135a98c12894f1ae1adca511f67c45/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-159256",
	                "Source": "/var/lib/docker/volumes/ha-159256/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-159256",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-159256",
	                "name.minikube.sigs.k8s.io": "ha-159256",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e132f18eb553add33eafd7b89ab37474d510329f4fe471e6cb248b8b315555ed",
	            "SandboxKey": "/var/run/docker/netns/e132f18eb553",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34735"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34731"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34733"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34732"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-159256": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "00c2cc3cc5aba79d7df63d82ad40adbd61b2e2302d8a0e9504d8ede24942d5cc",
	                    "EndpointID": "1c5fb3cf9359750b59a5cb2e2c1b7871b39fffddcb5a679e9b92ab44083951ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-159256",
	                        "9432785ebd3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
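
Two values in the inspect output above are easier to read once decoded: HostConfig.Memory of 2306867200 bytes is exactly the profile's Memory:2200 setting (2200 MiB = 2200 x 1024 x 1024 bytes), with MemorySwap at twice that, and NanoCpus of 2000000000 is CPUs:2 expressed in nano-CPUs (1e9 per CPU). When only the container state and its address on the ha-159256 network are of interest, a Go-template format string trims the dump; a minimal sketch against the same container:

	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-159256
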
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-159256 -n ha-159256
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-159256 logs -n 25: (1.910800552s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-159256 cp ha-159256-m03:/home/docker/cp-test.txt                              | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m04:/home/docker/cp-test_ha-159256-m03_ha-159256-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n                                                                 | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n ha-159256-m04 sudo cat                                          | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | /home/docker/cp-test_ha-159256-m03_ha-159256-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-159256 cp testdata/cp-test.txt                                                | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n                                                                 | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-159256 cp ha-159256-m04:/home/docker/cp-test.txt                              | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3336646767/001/cp-test_ha-159256-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n                                                                 | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-159256 cp ha-159256-m04:/home/docker/cp-test.txt                              | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256:/home/docker/cp-test_ha-159256-m04_ha-159256.txt                       |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n                                                                 | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n ha-159256 sudo cat                                              | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | /home/docker/cp-test_ha-159256-m04_ha-159256.txt                                 |           |         |         |                     |                     |
	| cp      | ha-159256 cp ha-159256-m04:/home/docker/cp-test.txt                              | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m02:/home/docker/cp-test_ha-159256-m04_ha-159256-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n                                                                 | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n ha-159256-m02 sudo cat                                          | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | /home/docker/cp-test_ha-159256-m04_ha-159256-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-159256 cp ha-159256-m04:/home/docker/cp-test.txt                              | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m03:/home/docker/cp-test_ha-159256-m04_ha-159256-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n                                                                 | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | ha-159256-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-159256 ssh -n ha-159256-m03 sudo cat                                          | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | /home/docker/cp-test_ha-159256-m04_ha-159256-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-159256 node stop m02 -v=7                                                     | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:05 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-159256 node start m02 -v=7                                                    | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:05 UTC | 20 Apr 24 01:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-159256 -v=7                                                           | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-159256 -v=7                                                                | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:06 UTC | 20 Apr 24 01:07 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-159256 --wait=true -v=7                                                    | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:07 UTC | 20 Apr 24 01:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-159256                                                                | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:09 UTC |                     |
	| node    | ha-159256 node delete m03 -v=7                                                   | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:09 UTC | 20 Apr 24 01:09 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-159256 stop -v=7                                                              | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:10 UTC | 20 Apr 24 01:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-159256 --wait=true                                                         | ha-159256 | jenkins | v1.33.0 | 20 Apr 24 01:10 UTC | 20 Apr 24 01:12 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 01:10:36
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 01:10:36.415831 1701586 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:10:36.416117 1701586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:10:36.416131 1701586 out.go:304] Setting ErrFile to fd 2...
	I0420 01:10:36.416137 1701586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:10:36.416410 1701586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 01:10:36.416807 1701586 out.go:298] Setting JSON to false
	I0420 01:10:36.417777 1701586 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28383,"bootTime":1713547053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0420 01:10:36.417853 1701586 start.go:139] virtualization:  
	I0420 01:10:36.420723 1701586 out.go:177] * [ha-159256] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0420 01:10:36.423874 1701586 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:10:36.426128 1701586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:10:36.423942 1701586 notify.go:220] Checking for updates...
	I0420 01:10:36.428444 1701586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 01:10:36.430413 1701586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	I0420 01:10:36.432948 1701586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0420 01:10:36.435074 1701586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:10:36.437602 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:10:36.438119 1701586 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:10:36.462977 1701586 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0420 01:10:36.463095 1701586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 01:10:36.526427 1701586 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:42 SystemTime:2024-04-20 01:10:36.517476412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 01:10:36.526557 1701586 docker.go:295] overlay module found
	I0420 01:10:36.529321 1701586 out.go:177] * Using the docker driver based on existing profile
	I0420 01:10:36.531617 1701586 start.go:297] selected driver: docker
	I0420 01:10:36.531639 1701586 start.go:901] validating driver "docker" against &{Name:ha-159256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-159256 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:10:36.531793 1701586 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:10:36.531900 1701586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 01:10:36.577346 1701586 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:42 SystemTime:2024-04-20 01:10:36.568162688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 01:10:36.577815 1701586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:10:36.577872 1701586 cni.go:84] Creating CNI manager for ""
	I0420 01:10:36.577886 1701586 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0420 01:10:36.577934 1701586 start.go:340] cluster config:
	{Name:ha-159256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-159256 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-dri
ver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:10:36.580506 1701586 out.go:177] * Starting "ha-159256" primary control-plane node in "ha-159256" cluster
	I0420 01:10:36.582363 1701586 cache.go:121] Beginning downloading kic base image for docker with crio
	I0420 01:10:36.584466 1701586 out.go:177] * Pulling base image v0.0.43 ...
	I0420 01:10:36.586146 1701586 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:10:36.586203 1701586 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0420 01:10:36.586216 1701586 cache.go:56] Caching tarball of preloaded images
	I0420 01:10:36.586247 1701586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon
	I0420 01:10:36.586297 1701586 preload.go:173] Found /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0420 01:10:36.586307 1701586 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:10:36.586447 1701586 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/config.json ...
	I0420 01:10:36.601278 1701586 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon, skipping pull
	I0420 01:10:36.601301 1701586 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 exists in daemon, skipping load
	I0420 01:10:36.601321 1701586 cache.go:194] Successfully downloaded all kic artifacts
	I0420 01:10:36.601350 1701586 start.go:360] acquireMachinesLock for ha-159256: {Name:mk985cfc27534644a21982e7b96ca274b1ec3fe9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:10:36.601426 1701586 start.go:364] duration metric: took 56.565µs to acquireMachinesLock for "ha-159256"
	I0420 01:10:36.601449 1701586 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:10:36.601462 1701586 fix.go:54] fixHost starting: 
	I0420 01:10:36.601788 1701586 cli_runner.go:164] Run: docker container inspect ha-159256 --format={{.State.Status}}
	I0420 01:10:36.616633 1701586 fix.go:112] recreateIfNeeded on ha-159256: state=Stopped err=<nil>
	W0420 01:10:36.616667 1701586 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:10:36.618853 1701586 out.go:177] * Restarting existing docker container for "ha-159256" ...
	I0420 01:10:36.620890 1701586 cli_runner.go:164] Run: docker start ha-159256
	I0420 01:10:36.917519 1701586 cli_runner.go:164] Run: docker container inspect ha-159256 --format={{.State.Status}}
	I0420 01:10:36.937059 1701586 kic.go:430] container "ha-159256" state is running.
	I0420 01:10:36.937438 1701586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256
	I0420 01:10:36.960694 1701586 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/config.json ...
	I0420 01:10:36.960938 1701586 machine.go:94] provisionDockerMachine start ...
	I0420 01:10:36.960997 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:36.981653 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:10:36.981933 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34735 <nil> <nil>}
	I0420 01:10:36.981942 1701586 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:10:36.982644 1701586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0420 01:10:40.137439 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-159256
	
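
The first dial immediately after "docker start" fails with "ssh: handshake failed: EOF" because sshd inside the restarted container is not yet accepting connections; libmachine simply retries the hostname probe until it succeeds a few seconds later. A minimal sketch of the same readiness loop, assuming the port (34735), user (docker), and profile key path logged above:

	# Poll until sshd inside the restarted container accepts a session.
	until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	    -i ~/.minikube/machines/ha-159256/id_rsa \
	    -p 34735 docker@127.0.0.1 hostname >/dev/null 2>&1; do
	  sleep 1   # sshd is still starting; try again
	done
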
	I0420 01:10:40.137466 1701586 ubuntu.go:169] provisioning hostname "ha-159256"
	I0420 01:10:40.137566 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:40.159855 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:10:40.160143 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34735 <nil> <nil>}
	I0420 01:10:40.160161 1701586 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-159256 && echo "ha-159256" | sudo tee /etc/hostname
	I0420 01:10:40.317500 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-159256
	
	I0420 01:10:40.317620 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:40.333830 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:10:40.334090 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34735 <nil> <nil>}
	I0420 01:10:40.334113 1701586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-159256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-159256/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-159256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:10:40.477618 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
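
The script above is idempotent: 127.0.1.1 is the Debian/Ubuntu convention for mapping a machine's own hostname, and the rewrite only runs when no /etc/hosts line already ends in the hostname, replacing an existing 127.0.1.1 entry or appending a new one. A quick check, inside the container, that the mapping took effect:

	grep -n '127.0.1.1' /etc/hosts   # should show "127.0.1.1 ha-159256"
	getent hosts ha-159256           # resolves via /etc/hosts, no DNS needed
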
	I0420 01:10:40.477643 1701586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18703-1638187/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-1638187/.minikube}
	I0420 01:10:40.477661 1701586 ubuntu.go:177] setting up certificates
	I0420 01:10:40.477672 1701586 provision.go:84] configureAuth start
	I0420 01:10:40.477732 1701586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256
	I0420 01:10:40.493071 1701586 provision.go:143] copyHostCerts
	I0420 01:10:40.493115 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem
	I0420 01:10:40.493152 1701586 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem, removing ...
	I0420 01:10:40.493164 1701586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem
	I0420 01:10:40.493242 1701586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem (1123 bytes)
	I0420 01:10:40.493339 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem
	I0420 01:10:40.493364 1701586 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem, removing ...
	I0420 01:10:40.493372 1701586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem
	I0420 01:10:40.493402 1701586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem (1675 bytes)
	I0420 01:10:40.493456 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem
	I0420 01:10:40.493477 1701586 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem, removing ...
	I0420 01:10:40.493482 1701586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem
	I0420 01:10:40.493512 1701586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem (1082 bytes)
	I0420 01:10:40.493601 1701586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem org=jenkins.ha-159256 san=[127.0.0.1 192.168.49.2 ha-159256 localhost minikube]
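
The server certificate is minted with a SAN list covering every name the machine may be dialed by: loopback, the container IP, the hostname, and the generic localhost/minikube aliases. One way to confirm the SANs on the generated cert (the -ext flag needs OpenSSL 1.1.1 or newer; older builds can grep the -text output instead):

	openssl x509 -noout -ext subjectAltName \
	    -in ~/.minikube/machines/server.pem
	# should list the DNS names and IPs from the san=[...] line above
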
	I0420 01:10:40.723970 1701586 provision.go:177] copyRemoteCerts
	I0420 01:10:40.724040 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:10:40.724087 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:40.739334 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34735 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256/id_rsa Username:docker}
	I0420 01:10:40.838145 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 01:10:40.838251 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0420 01:10:40.861424 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 01:10:40.861605 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0420 01:10:40.885434 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 01:10:40.885496 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:10:40.908960 1701586 provision.go:87] duration metric: took 431.261194ms to configureAuth
	I0420 01:10:40.908991 1701586 ubuntu.go:193] setting minikube options for container-runtime
	I0420 01:10:40.909220 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:10:40.909328 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:40.924142 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:10:40.924405 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34735 <nil> <nil>}
	I0420 01:10:40.924426 1701586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:10:41.328976 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:10:41.328998 1701586 machine.go:97] duration metric: took 4.368050197s to provisionDockerMachine
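
The tee above writes an environment file rather than editing the crio unit directly; this assumes, as in the kicbase image, that crio.service loads /etc/sysconfig/crio.minikube through an EnvironmentFile= directive, so the extra --insecure-registry flag takes effect on the restart. To verify on the node:

	cat /etc/sysconfig/crio.minikube          # CRIO_MINIKUBE_OPTIONS='--insecure-registry ...'
	systemctl cat crio | grep -i environment  # confirm the unit sources the file
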
	I0420 01:10:41.329010 1701586 start.go:293] postStartSetup for "ha-159256" (driver="docker")
	I0420 01:10:41.329021 1701586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:10:41.329103 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:10:41.329148 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:41.352994 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34735 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256/id_rsa Username:docker}
	I0420 01:10:41.454484 1701586 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:10:41.457715 1701586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0420 01:10:41.457753 1701586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0420 01:10:41.457764 1701586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0420 01:10:41.457771 1701586 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0420 01:10:41.457781 1701586 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/addons for local assets ...
	I0420 01:10:41.457835 1701586 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/files for local assets ...
	I0420 01:10:41.457918 1701586 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem -> 16436232.pem in /etc/ssl/certs
	I0420 01:10:41.457925 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem -> /etc/ssl/certs/16436232.pem
	I0420 01:10:41.458025 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:10:41.467410 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem --> /etc/ssl/certs/16436232.pem (1708 bytes)
	I0420 01:10:41.492502 1701586 start.go:296] duration metric: took 163.476261ms for postStartSetup
	I0420 01:10:41.492649 1701586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 01:10:41.492716 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:41.508752 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34735 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256/id_rsa Username:docker}
	I0420 01:10:41.606607 1701586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0420 01:10:41.611427 1701586 fix.go:56] duration metric: took 5.009964597s for fixHost
	I0420 01:10:41.611453 1701586 start.go:83] releasing machines lock for "ha-159256", held for 5.0100182s
	I0420 01:10:41.611540 1701586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256
	I0420 01:10:41.627051 1701586 ssh_runner.go:195] Run: cat /version.json
	I0420 01:10:41.627082 1701586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:10:41.627104 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:41.627136 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:41.643893 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34735 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256/id_rsa Username:docker}
	I0420 01:10:41.653667 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34735 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256/id_rsa Username:docker}
	I0420 01:10:41.740976 1701586 ssh_runner.go:195] Run: systemctl --version
	I0420 01:10:41.852119 1701586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:10:41.993384 1701586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0420 01:10:41.998002 1701586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:10:42.013867 1701586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0420 01:10:42.013996 1701586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:10:42.023310 1701586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0420 01:10:42.023335 1701586 start.go:494] detecting cgroup driver to use...
	I0420 01:10:42.023387 1701586 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0420 01:10:42.023444 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:10:42.035894 1701586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:10:42.048163 1701586 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:10:42.048248 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:10:42.061438 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:10:42.073996 1701586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:10:42.170870 1701586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:10:42.273804 1701586 docker.go:233] disabling docker service ...
	I0420 01:10:42.273876 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:10:42.286968 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:10:42.300579 1701586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:10:42.391763 1701586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:10:42.478398 1701586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
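
Note the escalation in the commands above: stop halts the unit now, disable drops it from boot targets, and mask symlinks the unit file to /dev/null so that nothing, including socket activation, can start Docker while CRI-O owns the node. The masked state is visible directly:

	systemctl is-enabled docker.service   # prints "masked"
	sudo systemctl start docker.service   # fails: unit docker.service is masked
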
	I0420 01:10:42.489594 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:10:42.505964 1701586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:10:42.506031 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:42.516567 1701586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:10:42.516639 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:42.527530 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:42.537297 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:42.547109 1701586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:10:42.556310 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:42.566478 1701586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:42.575634 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:42.585780 1701586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:10:42.594666 1701586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:10:42.603281 1701586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:10:42.682466 1701586 ssh_runner.go:195] Run: sudo systemctl restart crio
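
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place before the restart. Reconstructed from those edits (an approximation, with section headers assumed from CRI-O's stock layout, not a dump from this host), the relevant part of the drop-in afterwards is:

	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# [crio.image]
	# pause_image = "registry.k8s.io/pause:3.9"
	# [crio.runtime]
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]
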
	I0420 01:10:42.803738 1701586 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:10:42.803860 1701586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:10:42.807558 1701586 start.go:562] Will wait 60s for crictl version
	I0420 01:10:42.807641 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:10:42.811094 1701586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:10:42.855495 1701586 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0420 01:10:42.855651 1701586 ssh_runner.go:195] Run: crio --version
	I0420 01:10:42.900469 1701586 ssh_runner.go:195] Run: crio --version
	I0420 01:10:42.941089 1701586 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0420 01:10:42.943138 1701586 cli_runner.go:164] Run: docker network inspect ha-159256 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0420 01:10:42.955970 1701586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0420 01:10:42.959624 1701586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
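
The hosts rewrite follows a careful pattern: filter out any stale host.minikube.internal line, append the fresh mapping to a temp file, then sudo cp the result over /etc/hosts. Using cp rather than mv matters here, because a container's /etc/hosts is bind-mounted and must be rewritten in place rather than replaced:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts;
	  echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # cp rewrites the bind-mounted inode in place
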
	I0420 01:10:42.970550 1701586 kubeadm.go:877] updating cluster {Name:ha-159256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-159256 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:f
alse metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s} ...
	I0420 01:10:42.970708 1701586 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:10:42.970766 1701586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:10:43.018539 1701586 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:10:43.018563 1701586 crio.go:433] Images already preloaded, skipping extraction
	I0420 01:10:43.018622 1701586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0420 01:10:43.057298 1701586 crio.go:514] all images are preloaded for cri-o runtime.
	I0420 01:10:43.057370 1701586 cache_images.go:84] Images are preloaded, skipping loading
	I0420 01:10:43.057393 1701586 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 crio true true} ...
	I0420 01:10:43.057578 1701586 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-159256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-159256 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
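
The rendered unit is installed as a systemd drop-in (10-kubeadm.conf, scp'd below) rather than a replacement service file: the empty ExecStart= line first clears the packaged command, and the second ExecStart= supplies the minikube-specific flags, with --hostname-override and --node-ip pinning the node identity to ha-159256/192.168.49.2. The merged result can be inspected with:

	systemctl cat kubelet   # stock unit plus the 10-kubeadm.conf override
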
	I0420 01:10:43.057711 1701586 ssh_runner.go:195] Run: crio config
	I0420 01:10:43.128138 1701586 cni.go:84] Creating CNI manager for ""
	I0420 01:10:43.128212 1701586 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0420 01:10:43.128239 1701586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0420 01:10:43.128295 1701586 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-159256 NodeName:ha-159256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0420 01:10:43.128494 1701586 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-159256"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
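
This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new (scp'd below) and then compared against the kubeadm.yaml the cluster was originally built from; an empty diff is what later lets the restart path skip reconfiguration entirely. The manual equivalent of that check:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no reconfiguration needed"
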
	I0420 01:10:43.128545 1701586 kube-vip.go:111] generating kube-vip config ...
	I0420 01:10:43.128637 1701586 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0420 01:10:43.140889 1701586 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 01:10:43.141054 1701586 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
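
kube-vip runs as a static pod on each control-plane node and elects a leader through a Kubernetes coordination lease (vip_leasename: plndr-cp-lock); the holder answers ARP for the VIP 192.168.49.254, and the tight timings (5s lease, 3s renew deadline, 1s retry) keep failover in the low seconds when a control-plane node dies. Once the cluster is up, the current holder is visible with:

	kubectl -n kube-system get lease plndr-cp-lock \
	    -o jsonpath='{.spec.holderIdentity}{"\n"}'
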
	I0420 01:10:43.141155 1701586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:10:43.150532 1701586 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:10:43.150614 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0420 01:10:43.159172 1701586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0420 01:10:43.176401 1701586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:10:43.194703 1701586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0420 01:10:43.212887 1701586 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0420 01:10:43.230672 1701586 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0420 01:10:43.234297 1701586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:10:43.244876 1701586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:10:43.328083 1701586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:10:43.341716 1701586 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256 for IP: 192.168.49.2
	I0420 01:10:43.341739 1701586 certs.go:194] generating shared ca certs ...
	I0420 01:10:43.341756 1701586 certs.go:226] acquiring lock for ca certs: {Name:mkf02d2bd3e0f29e12b7cec7c5b9a48566830288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:10:43.341892 1701586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key
	I0420 01:10:43.341946 1701586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key
	I0420 01:10:43.341957 1701586 certs.go:256] generating profile certs ...
	I0420 01:10:43.342046 1701586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/client.key
	I0420 01:10:43.342075 1701586 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key.11e5dc0a
	I0420 01:10:43.342093 1701586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.crt.11e5dc0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0420 01:10:44.008500 1701586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.crt.11e5dc0a ...
	I0420 01:10:44.008544 1701586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.crt.11e5dc0a: {Name:mk927df061a2244ab896fdd496d20ba6537c6778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:10:44.008756 1701586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key.11e5dc0a ...
	I0420 01:10:44.008771 1701586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key.11e5dc0a: {Name:mk4a66328aac660660ce9b53a08103f69db3a093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:10:44.008866 1701586 certs.go:381] copying /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.crt.11e5dc0a -> /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.crt
	I0420 01:10:44.009005 1701586 certs.go:385] copying /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key.11e5dc0a -> /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key
	I0420 01:10:44.009138 1701586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.key
	I0420 01:10:44.009163 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 01:10:44.009178 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 01:10:44.009194 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 01:10:44.009209 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 01:10:44.009224 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 01:10:44.009238 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 01:10:44.009252 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 01:10:44.009265 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 01:10:44.009318 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623.pem (1338 bytes)
	W0420 01:10:44.009354 1701586 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623_empty.pem, impossibly tiny 0 bytes
	I0420 01:10:44.009366 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem (1679 bytes)
	I0420 01:10:44.009392 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem (1082 bytes)
	I0420 01:10:44.009416 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:10:44.009441 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem (1675 bytes)
	I0420 01:10:44.009487 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem (1708 bytes)
	I0420 01:10:44.009521 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem -> /usr/share/ca-certificates/16436232.pem
	I0420 01:10:44.009583 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:10:44.009605 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623.pem -> /usr/share/ca-certificates/1643623.pem
	I0420 01:10:44.010237 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:10:44.036411 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:10:44.061895 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:10:44.087422 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0420 01:10:44.111845 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:10:44.135785 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:10:44.159577 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:10:44.184120 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:10:44.208156 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem --> /usr/share/ca-certificates/16436232.pem (1708 bytes)
	I0420 01:10:44.231771 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:10:44.255555 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623.pem --> /usr/share/ca-certificates/1643623.pem (1338 bytes)
	I0420 01:10:44.279513 1701586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0420 01:10:44.297349 1701586 ssh_runner.go:195] Run: openssl version
	I0420 01:10:44.302938 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16436232.pem && ln -fs /usr/share/ca-certificates/16436232.pem /etc/ssl/certs/16436232.pem"
	I0420 01:10:44.312766 1701586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16436232.pem
	I0420 01:10:44.316338 1701586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:57 /usr/share/ca-certificates/16436232.pem
	I0420 01:10:44.316429 1701586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16436232.pem
	I0420 01:10:44.323398 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16436232.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:10:44.332450 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:10:44.341523 1701586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:10:44.345050 1701586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:46 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:10:44.345118 1701586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:10:44.352328 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:10:44.361123 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1643623.pem && ln -fs /usr/share/ca-certificates/1643623.pem /etc/ssl/certs/1643623.pem"
	I0420 01:10:44.370815 1701586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1643623.pem
	I0420 01:10:44.374375 1701586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:57 /usr/share/ca-certificates/1643623.pem
	I0420 01:10:44.374473 1701586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1643623.pem
	I0420 01:10:44.381766 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1643623.pem /etc/ssl/certs/51391683.0"
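
The hex names such as b5213941.0 follow OpenSSL's subject-hash convention: verifiers scan /etc/ssl/certs for a file named <subject-hash>.0, so each imported CA needs a symlink derived from openssl x509 -hash. The three link commands above are effectively a manual c_rehash:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
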
	I0420 01:10:44.390689 1701586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:10:44.394100 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:10:44.400941 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:10:44.407924 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:10:44.414870 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:10:44.421682 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:10:44.428701 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
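
Each -checkend 86400 call exits non-zero if the certificate expires within the next 86400 seconds (24 hours); all six checks passing is what allows the restart to reuse the existing control-plane certificates instead of regenerating them. For example:

	if openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "valid for at least another 24h"
	else
	  echo "expires within 24h - regenerate"
	fi
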
	I0420 01:10:44.435572 1701586 kubeadm.go:391] StartCluster: {Name:ha-159256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-159256 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:fals
e metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 G
PUs: AutoPauseInterval:1m0s}
	I0420 01:10:44.435710 1701586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0420 01:10:44.435769 1701586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0420 01:10:44.477407 1701586 cri.go:89] found id: ""
	I0420 01:10:44.477502 1701586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0420 01:10:44.486167 1701586 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0420 01:10:44.486193 1701586 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0420 01:10:44.486199 1701586 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0420 01:10:44.486246 1701586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0420 01:10:44.494369 1701586 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0420 01:10:44.494845 1701586 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-159256" does not appear in /home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 01:10:44.494969 1701586 kubeconfig.go:62] /home/jenkins/minikube-integration/18703-1638187/kubeconfig needs updating (will repair): [kubeconfig missing "ha-159256" cluster setting kubeconfig missing "ha-159256" context setting]
	I0420 01:10:44.495300 1701586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/kubeconfig: {Name:mk33979dc7705003abaa608c8031c04a91a05c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:10:44.495722 1701586 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 01:10:44.495967 1701586 kapi.go:59] client config for ha-159256: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/client.crt", KeyFile:"/home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/client.key", CAFile:"/home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17a1410), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0420 01:10:44.496418 1701586 cert_rotation.go:137] Starting client certificate rotation controller
	I0420 01:10:44.497898 1701586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0420 01:10:44.506535 1701586 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0420 01:10:44.506600 1701586 kubeadm.go:591] duration metric: took 20.395564ms to restartPrimaryControlPlane
	I0420 01:10:44.506619 1701586 kubeadm.go:393] duration metric: took 71.054935ms to StartCluster
	I0420 01:10:44.506634 1701586 settings.go:142] acquiring lock: {Name:mk38dc124731a3de0f512758a89f5557db305d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:10:44.506705 1701586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 01:10:44.507311 1701586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/kubeconfig: {Name:mk33979dc7705003abaa608c8031c04a91a05c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:10:44.507510 1701586 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:10:44.507539 1701586 start.go:240] waiting for startup goroutines ...
	I0420 01:10:44.507554 1701586 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0420 01:10:44.511966 1701586 out.go:177] * Enabled addons: 
	I0420 01:10:44.508073 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:10:44.513820 1701586 addons.go:505] duration metric: took 6.264098ms for enable addons: enabled=[]
	I0420 01:10:44.513870 1701586 start.go:245] waiting for cluster config update ...
	I0420 01:10:44.513881 1701586 start.go:254] writing updated cluster config ...
	I0420 01:10:44.516051 1701586 out.go:177] 
	I0420 01:10:44.518406 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:10:44.518542 1701586 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/config.json ...
	I0420 01:10:44.520921 1701586 out.go:177] * Starting "ha-159256-m02" control-plane node in "ha-159256" cluster
	I0420 01:10:44.522943 1701586 cache.go:121] Beginning downloading kic base image for docker with crio
	I0420 01:10:44.524967 1701586 out.go:177] * Pulling base image v0.0.43 ...
	I0420 01:10:44.526896 1701586 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:10:44.526931 1701586 cache.go:56] Caching tarball of preloaded images
	I0420 01:10:44.526978 1701586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon
	I0420 01:10:44.527058 1701586 preload.go:173] Found /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0420 01:10:44.527077 1701586 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:10:44.527229 1701586 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/config.json ...
	I0420 01:10:44.540187 1701586 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon, skipping pull
	I0420 01:10:44.540215 1701586 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 exists in daemon, skipping load
	I0420 01:10:44.540235 1701586 cache.go:194] Successfully downloaded all kic artifacts
	I0420 01:10:44.540263 1701586 start.go:360] acquireMachinesLock for ha-159256-m02: {Name:mkf18059c36f7a81d03edcf6d0a7452936898ff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:10:44.540330 1701586 start.go:364] duration metric: took 44.323µs to acquireMachinesLock for "ha-159256-m02"
	I0420 01:10:44.540355 1701586 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:10:44.540366 1701586 fix.go:54] fixHost starting: m02
	I0420 01:10:44.540647 1701586 cli_runner.go:164] Run: docker container inspect ha-159256-m02 --format={{.State.Status}}
	I0420 01:10:44.555772 1701586 fix.go:112] recreateIfNeeded on ha-159256-m02: state=Stopped err=<nil>
	W0420 01:10:44.555798 1701586 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:10:44.560050 1701586 out.go:177] * Restarting existing docker container for "ha-159256-m02" ...
	I0420 01:10:44.562065 1701586 cli_runner.go:164] Run: docker start ha-159256-m02
	I0420 01:10:44.846148 1701586 cli_runner.go:164] Run: docker container inspect ha-159256-m02 --format={{.State.Status}}
	I0420 01:10:44.864131 1701586 kic.go:430] container "ha-159256-m02" state is running.
	I0420 01:10:44.864515 1701586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256-m02
	I0420 01:10:44.884151 1701586 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/config.json ...
	I0420 01:10:44.884644 1701586 machine.go:94] provisionDockerMachine start ...
	I0420 01:10:44.884711 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m02
	I0420 01:10:44.910519 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:10:44.910768 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34740 <nil> <nil>}
	I0420 01:10:44.910777 1701586 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:10:44.911546 1701586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0420 01:10:48.110546 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-159256-m02
	
	I0420 01:10:48.110569 1701586 ubuntu.go:169] provisioning hostname "ha-159256-m02"
	I0420 01:10:48.110639 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m02
	I0420 01:10:48.148599 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:10:48.148833 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34740 <nil> <nil>}
	I0420 01:10:48.148844 1701586 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-159256-m02 && echo "ha-159256-m02" | sudo tee /etc/hostname
	I0420 01:10:48.384252 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-159256-m02
	
	I0420 01:10:48.384418 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m02
	I0420 01:10:48.418600 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:10:48.418840 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34740 <nil> <nil>}
	I0420 01:10:48.418857 1701586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-159256-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-159256-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-159256-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:10:48.618951 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:10:48.619022 1701586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18703-1638187/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-1638187/.minikube}
	I0420 01:10:48.619052 1701586 ubuntu.go:177] setting up certificates
	I0420 01:10:48.619097 1701586 provision.go:84] configureAuth start
	I0420 01:10:48.619213 1701586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256-m02
	I0420 01:10:48.661733 1701586 provision.go:143] copyHostCerts
	I0420 01:10:48.661782 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem
	I0420 01:10:48.661818 1701586 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem, removing ...
	I0420 01:10:48.661830 1701586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem
	I0420 01:10:48.661915 1701586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem (1082 bytes)
	I0420 01:10:48.662005 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem
	I0420 01:10:48.662028 1701586 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem, removing ...
	I0420 01:10:48.662037 1701586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem
	I0420 01:10:48.662066 1701586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem (1123 bytes)
	I0420 01:10:48.662121 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem
	I0420 01:10:48.662142 1701586 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem, removing ...
	I0420 01:10:48.662147 1701586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem
	I0420 01:10:48.662182 1701586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem (1675 bytes)
	I0420 01:10:48.662240 1701586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem org=jenkins.ha-159256-m02 san=[127.0.0.1 192.168.49.3 ha-159256-m02 localhost minikube]
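
The san=[...] list above is written into the X.509 Subject Alternative Name extension of the generated server.pem, so the one certificate is valid for the node's hostname as well as its loopback and cluster addresses. To confirm what actually landed on disk (path shortened from the Jenkins workspace above; entry ordering in the output is illustrative):

	$ openssl x509 -noout -text -in .minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	        X509v3 Subject Alternative Name:
	            DNS:ha-159256-m02, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.49.3
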
	I0420 01:10:49.210597 1701586 provision.go:177] copyRemoteCerts
	I0420 01:10:49.210673 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:10:49.210717 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m02
	I0420 01:10:49.226590 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34740 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m02/id_rsa Username:docker}
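
That sshutil struct carries everything needed to reproduce the session by hand: 34740 is whichever host port Docker published for the container's 22/tcp (resolved by the container inspect calls above), and the user inside the kic image is docker. A manual equivalent (key path shortened) would be:

	$ ssh -o StrictHostKeyChecking=no \
	    -i .minikube/machines/ha-159256-m02/id_rsa \
	    -p 34740 docker@127.0.0.1
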
	I0420 01:10:49.326527 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 01:10:49.326644 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0420 01:10:49.352629 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 01:10:49.352701 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0420 01:10:49.377809 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 01:10:49.377875 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0420 01:10:49.403966 1701586 provision.go:87] duration metric: took 784.817258ms to configureAuth
	I0420 01:10:49.404042 1701586 ubuntu.go:193] setting minikube options for container-runtime
	I0420 01:10:49.404293 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:10:49.404408 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m02
	I0420 01:10:49.419551 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:10:49.419801 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34740 <nil> <nil>}
	I0420 01:10:49.419821 1701586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:10:49.792633 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:10:49.792667 1701586 machine.go:97] duration metric: took 4.908009287s to provisionDockerMachine
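
The last provisioning step above amounts to a one-line sysconfig drop-in. The 10.96.0.0/12 range is the cluster's ServiceCIDR (it reappears in the kubeadm config dump later in this log), so CRI-O will pull from in-cluster registry Services over plain HTTP rather than requiring TLS:

	$ cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
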
	I0420 01:10:49.792679 1701586 start.go:293] postStartSetup for "ha-159256-m02" (driver="docker")
	I0420 01:10:49.792691 1701586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:10:49.792770 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:10:49.792836 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m02
	I0420 01:10:49.809303 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34740 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m02/id_rsa Username:docker}
	I0420 01:10:49.977240 1701586 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:10:49.983210 1701586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0420 01:10:49.983249 1701586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0420 01:10:49.983274 1701586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0420 01:10:49.983283 1701586 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0420 01:10:49.983294 1701586 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/addons for local assets ...
	I0420 01:10:49.983356 1701586 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/files for local assets ...
	I0420 01:10:49.983441 1701586 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem -> 16436232.pem in /etc/ssl/certs
	I0420 01:10:49.983453 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem -> /etc/ssl/certs/16436232.pem
	I0420 01:10:49.983563 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:10:50.023835 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem --> /etc/ssl/certs/16436232.pem (1708 bytes)
	I0420 01:10:50.078800 1701586 start.go:296] duration metric: took 286.104914ms for postStartSetup
	I0420 01:10:50.078906 1701586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 01:10:50.078968 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m02
	I0420 01:10:50.108153 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34740 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m02/id_rsa Username:docker}
	I0420 01:10:50.283312 1701586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0420 01:10:50.306963 1701586 fix.go:56] duration metric: took 5.766589029s for fixHost
	I0420 01:10:50.306987 1701586 start.go:83] releasing machines lock for "ha-159256-m02", held for 5.766645241s
	I0420 01:10:50.307057 1701586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256-m02
	I0420 01:10:50.325914 1701586 out.go:177] * Found network options:
	I0420 01:10:50.328435 1701586 out.go:177]   - NO_PROXY=192.168.49.2
	W0420 01:10:50.330684 1701586 proxy.go:119] fail to check proxy env: Error ip not in block
	W0420 01:10:50.330723 1701586 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 01:10:50.330793 1701586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:10:50.330841 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m02
	I0420 01:10:50.331058 1701586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:10:50.331114 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m02
	I0420 01:10:50.361316 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34740 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m02/id_rsa Username:docker}
	I0420 01:10:50.362315 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34740 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m02/id_rsa Username:docker}
	I0420 01:10:50.804059 1701586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0420 01:10:50.882105 1701586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:10:50.920156 1701586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0420 01:10:50.920248 1701586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:10:50.959305 1701586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
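
The two find/mv passes above park any preinstalled loopback, bridge, or podman CNI definitions under a *.mk_disabled suffix, which the runtime ignores (it only loads *.conf/*.conflist files), so the CNI minikube installs later is the only one CRI-O can pick up. A sketch for auditing or undoing that on the node (the filename shown is purely an example; it depends on what the base image shipped):

	$ sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled'
	# restore a parked config by stripping the suffix again:
	$ sudo mv /etc/cni/net.d/200-loopback.conf.mk_disabled \
	          /etc/cni/net.d/200-loopback.conf
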
	I0420 01:10:50.959353 1701586 start.go:494] detecting cgroup driver to use...
	I0420 01:10:50.959385 1701586 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0420 01:10:50.959449 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:10:51.001797 1701586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:10:51.035378 1701586 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:10:51.035456 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:10:51.083898 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:10:51.135252 1701586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:10:51.441677 1701586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:10:51.762232 1701586 docker.go:233] disabling docker service ...
	I0420 01:10:51.762345 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:10:51.812244 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:10:51.864583 1701586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:10:52.134924 1701586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:10:52.402642 1701586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:10:52.450772 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:10:52.511137 1701586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:10:52.511215 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:52.558618 1701586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:10:52.558698 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:52.606621 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:52.648803 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:52.709076 1701586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:10:52.759728 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:52.800412 1701586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:52.817772 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:10:52.840630 1701586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:10:52.876031 1701586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:10:52.920534 1701586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:10:53.194565 1701586 ssh_runner.go:195] Run: sudo systemctl restart crio
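
All of the sed edits above target the same drop-in, so once crio has restarted a single grep shows the effective runtime settings. Assuming the edits applied cleanly to a stock config, the output would read roughly:

	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	  "net.ipv4.ip_unprivileged_port_start=0",
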
	I0420 01:10:53.685927 1701586 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:10:53.686007 1701586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:10:53.691950 1701586 start.go:562] Will wait 60s for crictl version
	I0420 01:10:53.692028 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:10:53.701967 1701586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:10:53.775661 1701586 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0420 01:10:53.775747 1701586 ssh_runner.go:195] Run: crio --version
	I0420 01:10:53.848507 1701586 ssh_runner.go:195] Run: crio --version
	I0420 01:10:53.992610 1701586 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0420 01:10:53.994552 1701586 out.go:177]   - env NO_PROXY=192.168.49.2
	I0420 01:10:53.997017 1701586 cli_runner.go:164] Run: docker network inspect ha-159256 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0420 01:10:54.021075 1701586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0420 01:10:54.034656 1701586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
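
The brace-group pattern above is the standard workaround for editing /etc/hosts over SSH: output redirection is performed by the unprivileged shell before sudo ever runs, so the rewrite has to go to a temp file first and only the final copy is privileged. Annotated, the same command reads:

	# drop any stale host.minikube.internal line, append the fresh mapping...
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
	# ...then install the result with root privileges
	sudo cp /tmp/h.$$ /etc/hosts
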
	I0420 01:10:54.079037 1701586 mustload.go:65] Loading cluster: ha-159256
	I0420 01:10:54.079278 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:10:54.079546 1701586 cli_runner.go:164] Run: docker container inspect ha-159256 --format={{.State.Status}}
	I0420 01:10:54.101922 1701586 host.go:66] Checking if "ha-159256" exists ...
	I0420 01:10:54.102195 1701586 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256 for IP: 192.168.49.3
	I0420 01:10:54.102203 1701586 certs.go:194] generating shared ca certs ...
	I0420 01:10:54.102217 1701586 certs.go:226] acquiring lock for ca certs: {Name:mkf02d2bd3e0f29e12b7cec7c5b9a48566830288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:10:54.102323 1701586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key
	I0420 01:10:54.102378 1701586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key
	I0420 01:10:54.102390 1701586 certs.go:256] generating profile certs ...
	I0420 01:10:54.102467 1701586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/client.key
	I0420 01:10:54.102535 1701586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key.2ce3c0b7
	I0420 01:10:54.102577 1701586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.key
	I0420 01:10:54.102591 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 01:10:54.102604 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 01:10:54.102622 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 01:10:54.102640 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 01:10:54.102651 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0420 01:10:54.102667 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0420 01:10:54.102684 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0420 01:10:54.102701 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0420 01:10:54.102760 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623.pem (1338 bytes)
	W0420 01:10:54.102792 1701586 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623_empty.pem, impossibly tiny 0 bytes
	I0420 01:10:54.102805 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem (1679 bytes)
	I0420 01:10:54.102830 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem (1082 bytes)
	I0420 01:10:54.102857 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:10:54.102887 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem (1675 bytes)
	I0420 01:10:54.102934 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem (1708 bytes)
	I0420 01:10:54.102966 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623.pem -> /usr/share/ca-certificates/1643623.pem
	I0420 01:10:54.102984 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem -> /usr/share/ca-certificates/16436232.pem
	I0420 01:10:54.103001 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:10:54.103055 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:10:54.127164 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34735 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256/id_rsa Username:docker}
	I0420 01:10:54.249828 1701586 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0420 01:10:54.258695 1701586 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0420 01:10:54.277586 1701586 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0420 01:10:54.281880 1701586 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0420 01:10:54.299161 1701586 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0420 01:10:54.303455 1701586 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0420 01:10:54.317737 1701586 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0420 01:10:54.321769 1701586 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0420 01:10:54.336709 1701586 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0420 01:10:54.341540 1701586 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0420 01:10:54.354821 1701586 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0420 01:10:54.358817 1701586 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0420 01:10:54.371684 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:10:54.413752 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:10:54.455709 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:10:54.491481 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0420 01:10:54.541167 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0420 01:10:54.577185 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0420 01:10:54.622622 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0420 01:10:54.662786 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0420 01:10:54.703707 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623.pem --> /usr/share/ca-certificates/1643623.pem (1338 bytes)
	I0420 01:10:54.736759 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem --> /usr/share/ca-certificates/16436232.pem (1708 bytes)
	I0420 01:10:54.767658 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:10:54.801573 1701586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0420 01:10:54.841791 1701586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0420 01:10:54.864948 1701586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0420 01:10:54.894136 1701586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0420 01:10:54.925216 1701586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0420 01:10:54.963191 1701586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0420 01:10:54.995802 1701586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0420 01:10:55.028822 1701586 ssh_runner.go:195] Run: openssl version
	I0420 01:10:55.037088 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1643623.pem && ln -fs /usr/share/ca-certificates/1643623.pem /etc/ssl/certs/1643623.pem"
	I0420 01:10:55.055489 1701586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1643623.pem
	I0420 01:10:55.061119 1701586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:57 /usr/share/ca-certificates/1643623.pem
	I0420 01:10:55.061183 1701586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1643623.pem
	I0420 01:10:55.077962 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1643623.pem /etc/ssl/certs/51391683.0"
	I0420 01:10:55.091417 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16436232.pem && ln -fs /usr/share/ca-certificates/16436232.pem /etc/ssl/certs/16436232.pem"
	I0420 01:10:55.103099 1701586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16436232.pem
	I0420 01:10:55.107562 1701586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:57 /usr/share/ca-certificates/16436232.pem
	I0420 01:10:55.107702 1701586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16436232.pem
	I0420 01:10:55.120429 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16436232.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:10:55.134421 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:10:55.149679 1701586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:10:55.157096 1701586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:46 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:10:55.157210 1701586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:10:55.167201 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
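
The hash-then-symlink sequence repeated three times above is how OpenSSL's CA directory lookup works: at verification time the library searches /etc/ssl/certs for a file named after the subject-name hash of the candidate issuer, so every CA PEM needs a <hash>.0 alias (51391683, 3ec20f2e and b5213941 here). By hand, for one cert:

	# compute the subject hash and create the lookup alias (cert name is an example)
	hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
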
	I0420 01:10:55.178266 1701586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:10:55.182352 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0420 01:10:55.189867 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0420 01:10:55.198200 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0420 01:10:55.212799 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0420 01:10:55.220683 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0420 01:10:55.228071 1701586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
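
Each of those -checkend 86400 probes exits 0 only if the certificate will still be valid 24 hours from now; that is the whole reuse-or-regenerate decision for the control-plane certs. In script form:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "cert valid for at least 24h, reuse it"
	else
	    echo "cert expires within 24h (or is unreadable), regenerate"
	fi
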
	I0420 01:10:55.235948 1701586 kubeadm.go:928] updating node {m02 192.168.49.3 8443 v1.30.0 crio true true} ...
	I0420 01:10:55.236115 1701586 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-159256-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-159256 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:10:55.236163 1701586 kube-vip.go:111] generating kube-vip config ...
	I0420 01:10:55.236243 1701586 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0420 01:10:55.250010 1701586 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0420 01:10:55.250128 1701586 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
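
The manifest above runs kube-vip as a static pod on each control-plane node. All instances compete for the Lease named in vip_leasename, and the winner announces the virtual IP 192.168.49.254 (the APIServerHAVIP from the kubeadm config above) over ARP on eth0 and load-balances port 8443 across the apiservers. Once the cluster is reachable, the current holder can be read straight from that Lease:

	$ kubectl -n kube-system get lease plndr-cp-lock \
	    -o jsonpath='{.spec.holderIdentity}'
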
	I0420 01:10:55.250217 1701586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:10:55.266072 1701586 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:10:55.266223 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0420 01:10:55.275589 1701586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0420 01:10:55.295406 1701586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:10:55.315418 1701586 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0420 01:10:55.343229 1701586 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0420 01:10:55.346871 1701586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:10:55.358445 1701586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:10:55.534691 1701586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:10:55.548901 1701586 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0420 01:10:55.553884 1701586 out.go:177] * Verifying Kubernetes components...
	I0420 01:10:55.549272 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:10:55.556119 1701586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:10:55.723411 1701586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:10:55.738437 1701586 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 01:10:55.738712 1701586 kapi.go:59] client config for ha-159256: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/client.crt", KeyFile:"/home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/client.key", CAFile:"/home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17a1410), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0420 01:10:55.738770 1701586 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0420 01:10:55.738978 1701586 node_ready.go:35] waiting up to 6m0s for node "ha-159256-m02" to be "Ready" ...
	I0420 01:10:55.739055 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:10:55.739060 1701586 round_trippers.go:469] Request Headers:
	I0420 01:10:55.739069 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:10:55.739074 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:07.796542 1701586 round_trippers.go:574] Response Status: 500 Internal Server Error in 12057 milliseconds
	I0420 01:11:07.801774 1701586 node_ready.go:53] error getting node "ha-159256-m02": etcdserver: request timed out
	I0420 01:11:07.801853 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:07.801859 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:07.801867 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:07.801871 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:14.805386 1701586 round_trippers.go:574] Response Status: 500 Internal Server Error in 7003 milliseconds
	I0420 01:11:14.805694 1701586 node_ready.go:53] error getting node "ha-159256-m02": etcdserver: request timed out
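
Those 500s with "etcdserver: request timed out" are expected in this window: the etcd member on m02 was just restarted along with its container, and until it rejoins the quorum the apiserver's reads stall and eventually fail. A hedged way to watch the member come back from the node itself (the cert paths come from the checks earlier in this log; looking the container up via crictl, and etcdctl being present in the etcd image, are assumptions):

	$ sudo crictl exec "$(sudo crictl ps --name etcd -q | head -n1)" etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	    --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	    endpoint health
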
	I0420 01:11:14.805756 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:14.805760 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:14.805768 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:14.805774 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.862954 1701586 round_trippers.go:574] Response Status: 200 OK in 4057 milliseconds
	I0420 01:11:18.864511 1701586 node_ready.go:49] node "ha-159256-m02" has status "Ready":"True"
	I0420 01:11:18.864540 1701586 node_ready.go:38] duration metric: took 23.125541956s for node "ha-159256-m02" to be "Ready" ...
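
That 23s node_ready wait is nothing more than polling the node's Ready condition through the overridden endpoint. The kubectl equivalent, assuming the ha-159256 context this run repaired into the kubeconfig earlier:

	$ kubectl --context ha-159256 get node ha-159256-m02 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	True
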
	I0420 01:11:18.864552 1701586 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:11:18.864624 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0420 01:11:18.864635 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.864644 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.864647 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.901222 1701586 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0420 01:11:18.920094 1701586 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f6s2n" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:18.920271 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f6s2n
	I0420 01:11:18.920285 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.920295 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.920301 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.927737 1701586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 01:11:18.928717 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:18.928740 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.928749 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.928761 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.935121 1701586 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 01:11:18.936104 1701586 pod_ready.go:92] pod "coredns-7db6d8ff4d-f6s2n" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:18.936129 1701586 pod_ready.go:81] duration metric: took 15.998931ms for pod "coredns-7db6d8ff4d-f6s2n" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:18.936140 1701586 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h2b7f" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:18.936212 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h2b7f
	I0420 01:11:18.936223 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.936232 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.936238 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.939568 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:11:18.940777 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:18.940797 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.940805 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.940810 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.943588 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:18.944635 1701586 pod_ready.go:92] pod "coredns-7db6d8ff4d-h2b7f" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:18.944684 1701586 pod_ready.go:81] duration metric: took 8.530528ms for pod "coredns-7db6d8ff4d-h2b7f" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:18.944719 1701586 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:18.944811 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-159256
	I0420 01:11:18.944844 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.944865 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.944886 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.954121 1701586 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 01:11:18.955261 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:18.955304 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.955340 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.955365 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.962480 1701586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 01:11:18.963320 1701586 pod_ready.go:92] pod "etcd-ha-159256" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:18.963341 1701586 pod_ready.go:81] duration metric: took 18.602943ms for pod "etcd-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:18.963354 1701586 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:18.963425 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-159256-m02
	I0420 01:11:18.963435 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.963444 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.963448 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.967477 1701586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 01:11:18.968201 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:18.968221 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.968229 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.968234 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.970405 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:18.971222 1701586 pod_ready.go:92] pod "etcd-ha-159256-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:18.971246 1701586 pod_ready.go:81] duration metric: took 7.881767ms for pod "etcd-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:18.971258 1701586 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-159256-m03" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:18.971319 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-159256-m03
	I0420 01:11:18.971329 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:18.971337 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:18.971341 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:18.976030 1701586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 01:11:19.065205 1701586 request.go:629] Waited for 88.201141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:19.065299 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:19.065313 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:19.065322 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:19.065326 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:19.067962 1701586 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0420 01:11:19.068309 1701586 pod_ready.go:97] node "ha-159256-m03" hosting pod "etcd-ha-159256-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
	I0420 01:11:19.068333 1701586 pod_ready.go:81] duration metric: took 97.067208ms for pod "etcd-ha-159256-m03" in "kube-system" namespace to be "Ready" ...
	E0420 01:11:19.068344 1701586 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-159256-m03" hosting pod "etcd-ha-159256-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
	I0420 01:11:19.068373 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:19.264700 1701586 request.go:629] Waited for 196.243153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:11:19.264768 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:11:19.264794 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:19.264808 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:19.264812 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:19.269140 1701586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 01:11:19.464683 1701586 request.go:629] Waited for 193.266633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:19.464788 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:19.464810 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:19.464883 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:19.464905 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:19.467663 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:19.468641 1701586 pod_ready.go:92] pod "kube-apiserver-ha-159256" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:19.468700 1701586 pod_ready.go:81] duration metric: took 400.295206ms for pod "kube-apiserver-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:19.468727 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:19.665599 1701586 request.go:629] Waited for 196.779705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256-m02
	I0420 01:11:19.665707 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256-m02
	I0420 01:11:19.665770 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:19.665797 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:19.665820 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:19.674934 1701586 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 01:11:19.865650 1701586 request.go:629] Waited for 189.184814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:19.865758 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:19.865797 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:19.865833 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:19.865854 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:19.913158 1701586 round_trippers.go:574] Response Status: 200 OK in 47 milliseconds
	I0420 01:11:19.916784 1701586 pod_ready.go:92] pod "kube-apiserver-ha-159256-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:19.916854 1701586 pod_ready.go:81] duration metric: took 448.106719ms for pod "kube-apiserver-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:19.916881 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-159256-m03" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:20.065615 1701586 request.go:629] Waited for 148.636453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256-m03
	I0420 01:11:20.065774 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256-m03
	I0420 01:11:20.065810 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:20.065848 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:20.065879 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:20.071825 1701586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 01:11:20.265491 1701586 request.go:629] Waited for 192.331554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:20.265627 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:20.265696 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:20.265723 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:20.265750 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:20.268851 1701586 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0420 01:11:20.269165 1701586 pod_ready.go:97] node "ha-159256-m03" hosting pod "kube-apiserver-ha-159256-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
	I0420 01:11:20.269194 1701586 pod_ready.go:81] duration metric: took 352.291931ms for pod "kube-apiserver-ha-159256-m03" in "kube-system" namespace to be "Ready" ...
	E0420 01:11:20.269225 1701586 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-159256-m03" hosting pod "kube-apiserver-ha-159256-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
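The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's token-bucket rate limiter, not by the API server: once the burst budget is spent, each request is delayed on the client side before it is sent. A hedged sketch of where that limiter lives (the kubeconfig path and helper name are illustrative):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newThrottledClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	// client-go defaults to QPS=5 and Burst=10; after the burst is consumed,
	// each request waits roughly 1/QPS = 200ms, which matches the ~190-197ms
	// gaps visible in the log above.
	cfg.QPS = 5
	cfg.Burst = 10
	return kubernetes.NewForConfig(cfg)
}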
	I0420 01:11:20.269235 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:20.465517 1701586 request.go:629] Waited for 196.209555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-159256
	I0420 01:11:20.465688 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-159256
	I0420 01:11:20.465726 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:20.465747 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:20.465765 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:20.468701 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:20.665191 1701586 request.go:629] Waited for 195.334897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:20.665276 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:20.665288 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:20.665298 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:20.665312 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:20.668066 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:20.668935 1701586 pod_ready.go:92] pod "kube-controller-manager-ha-159256" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:20.668957 1701586 pod_ready.go:81] duration metric: took 399.707696ms for pod "kube-controller-manager-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:20.668968 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:20.865180 1701586 request.go:629] Waited for 196.126283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-159256-m02
	I0420 01:11:20.865284 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-159256-m02
	I0420 01:11:20.865298 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:20.865308 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:20.865312 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:20.867833 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:21.065284 1701586 request.go:629] Waited for 196.318057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:21.065365 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:21.065378 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:21.065387 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:21.065393 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:21.068019 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:21.068651 1701586 pod_ready.go:92] pod "kube-controller-manager-ha-159256-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:21.068674 1701586 pod_ready.go:81] duration metric: took 399.676197ms for pod "kube-controller-manager-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:21.068687 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-159256-m03" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:21.265126 1701586 request.go:629] Waited for 196.350293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-159256-m03
	I0420 01:11:21.265188 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-159256-m03
	I0420 01:11:21.265194 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:21.265203 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:21.265209 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:21.268023 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:21.465375 1701586 request.go:629] Waited for 196.468107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:21.465438 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:21.465449 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:21.465464 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:21.465471 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:21.467977 1701586 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0420 01:11:21.468092 1701586 pod_ready.go:97] node "ha-159256-m03" hosting pod "kube-controller-manager-ha-159256-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
	I0420 01:11:21.468110 1701586 pod_ready.go:81] duration metric: took 399.4125ms for pod "kube-controller-manager-ha-159256-m03" in "kube-system" namespace to be "Ready" ...
	E0420 01:11:21.468121 1701586 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-159256-m03" hosting pod "kube-controller-manager-ha-159256-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
	I0420 01:11:21.468129 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5f79r" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:21.665411 1701586 request.go:629] Waited for 197.211988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5f79r
	I0420 01:11:21.665507 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5f79r
	I0420 01:11:21.665524 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:21.665560 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:21.665572 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:21.669082 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:11:21.864693 1701586 request.go:629] Waited for 194.149405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:11:21.864788 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:11:21.864819 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:21.864829 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:21.864844 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:21.867269 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:21.868244 1701586 pod_ready.go:92] pod "kube-proxy-5f79r" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:21.868266 1701586 pod_ready.go:81] duration metric: took 400.128672ms for pod "kube-proxy-5f79r" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:21.868299 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6hlpp" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:22.065329 1701586 request.go:629] Waited for 196.949283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6hlpp
	I0420 01:11:22.065426 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6hlpp
	I0420 01:11:22.065456 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:22.065475 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:22.065485 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:22.068329 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:22.265373 1701586 request.go:629] Waited for 196.361477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:22.265433 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:22.265439 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:22.265447 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:22.265455 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:22.268272 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:22.268851 1701586 pod_ready.go:92] pod "kube-proxy-6hlpp" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:22.268875 1701586 pod_ready.go:81] duration metric: took 400.553161ms for pod "kube-proxy-6hlpp" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:22.268887 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f26nw" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:22.465385 1701586 request.go:629] Waited for 196.400598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f26nw
	I0420 01:11:22.465444 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f26nw
	I0420 01:11:22.465454 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:22.465464 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:22.465473 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:22.468404 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:22.665620 1701586 request.go:629] Waited for 196.344747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:22.665715 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:22.665728 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:22.665740 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:22.665750 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:22.668395 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:22.669178 1701586 pod_ready.go:92] pod "kube-proxy-f26nw" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:22.669202 1701586 pod_ready.go:81] duration metric: took 400.277952ms for pod "kube-proxy-f26nw" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:22.669214 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pstnt" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:22.865607 1701586 request.go:629] Waited for 196.33047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pstnt
	I0420 01:11:22.865667 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pstnt
	I0420 01:11:22.865674 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:22.865688 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:22.865694 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:22.869938 1701586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 01:11:23.065337 1701586 request.go:629] Waited for 194.344591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:23.065397 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:23.065404 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:23.065412 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:23.065420 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:23.068083 1701586 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0420 01:11:23.068237 1701586 pod_ready.go:97] node "ha-159256-m03" hosting pod "kube-proxy-pstnt" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
	I0420 01:11:23.068256 1701586 pod_ready.go:81] duration metric: took 399.034804ms for pod "kube-proxy-pstnt" in "kube-system" namespace to be "Ready" ...
	E0420 01:11:23.068271 1701586 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-159256-m03" hosting pod "kube-proxy-pstnt" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
	I0420 01:11:23.068280 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:23.265569 1701586 request.go:629] Waited for 197.192977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256
	I0420 01:11:23.265650 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256
	I0420 01:11:23.265656 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:23.265666 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:23.265676 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:23.268649 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:23.465510 1701586 request.go:629] Waited for 196.329953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:23.465638 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:11:23.465649 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:23.465660 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:23.465703 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:23.468568 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:23.469333 1701586 pod_ready.go:92] pod "kube-scheduler-ha-159256" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:23.469356 1701586 pod_ready.go:81] duration metric: took 401.062701ms for pod "kube-scheduler-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:23.469368 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:23.665242 1701586 request.go:629] Waited for 195.792755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256-m02
	I0420 01:11:23.665332 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256-m02
	I0420 01:11:23.665346 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:23.665356 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:23.665370 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:23.668563 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:11:23.865187 1701586 request.go:629] Waited for 195.393004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:23.865286 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:11:23.865300 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:23.865310 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:23.865319 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:23.868253 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:11:23.868921 1701586 pod_ready.go:92] pod "kube-scheduler-ha-159256-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 01:11:23.868971 1701586 pod_ready.go:81] duration metric: took 399.594575ms for pod "kube-scheduler-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:23.868988 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-159256-m03" in "kube-system" namespace to be "Ready" ...
	I0420 01:11:24.065419 1701586 request.go:629] Waited for 196.362634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256-m03
	I0420 01:11:24.065520 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256-m03
	I0420 01:11:24.065559 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:24.065567 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:24.065572 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:24.069026 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:11:24.265110 1701586 request.go:629] Waited for 195.134337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:24.265224 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m03
	I0420 01:11:24.265256 1701586 round_trippers.go:469] Request Headers:
	I0420 01:11:24.265285 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:11:24.265309 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:11:24.269420 1701586 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I0420 01:11:24.269784 1701586 pod_ready.go:97] node "ha-159256-m03" hosting pod "kube-scheduler-ha-159256-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
	I0420 01:11:24.269865 1701586 pod_ready.go:81] duration metric: took 400.866965ms for pod "kube-scheduler-ha-159256-m03" in "kube-system" namespace to be "Ready" ...
	E0420 01:11:24.269891 1701586 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-159256-m03" hosting pod "kube-scheduler-ha-159256-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-159256-m03": nodes "ha-159256-m03" not found
	I0420 01:11:24.269926 1701586 pod_ready.go:38] duration metric: took 5.405362956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:11:24.269960 1701586 api_server.go:52] waiting for apiserver process to appear ...
	I0420 01:11:24.270053 1701586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:11:24.282814 1701586 api_server.go:72] duration metric: took 28.733861634s to wait for apiserver process to appear ...
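Before probing /healthz, the restart path first confirms that an apiserver process exists at all; the pgrep invocation logged above can be reproduced locally along these lines (a sketch only — minikube runs the command inside the node via its ssh_runner, not on the host):

package main

import (
	"fmt"
	"os/exec"
)

func apiserverPID() (string, error) {
	// Same command as the ssh_runner line above: newest matching process
	// (-n) whose full command line (-f) matches the pattern exactly (-x).
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver process not found: %w", err)
	}
	return string(out), nil
}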
	I0420 01:11:24.282914 1701586 api_server.go:88] waiting for apiserver healthz status ...
	I0420 01:11:24.282955 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:24.291841 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:24.291866 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
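In the verbose /healthz body above, each [+] line is an individual check that passed; the single [-] line, poststarthook/start-service-ip-repair-controllers, is the one holding the endpoint at 500, and "reason withheld" means this caller is not authorized to see the failure detail. The wait loop simply re-polls on a roughly 500ms cadence until the response flips to 200, along the lines of the sketch below (assumptions: an HTTP client that accepts minikube's self-signed certificate via InsecureSkipVerify, and the URL and timeout as parameters):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster uses minikube's self-signed certs; verification
		// is skipped here purely for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

The failing post-start hook normally clears once the apiserver finishes its bootstrap, which is why the loop keeps retrying rather than failing fast; the log below shows the same 500 body repeating on each poll until that happens.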
	I0420 01:11:24.783623 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:24.795272 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:24.795302 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:25.283933 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:25.295201 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:25.295246 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:25.783858 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:25.792088 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:25.792119 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:26.283291 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:26.291025 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:26.291055 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:26.783592 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:26.795439 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:26.795472 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:27.283924 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:27.292464 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:27.292503 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:27.783645 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:27.794242 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:27.794275 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:28.283927 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:28.292794 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:28.292822 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:28.783357 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:28.791014 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:28.791046 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:29.283673 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:29.291426 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:29.291454 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:29.784065 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:29.792653 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:29.792703 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:30.283094 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:30.292230 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:30.292275 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:30.783974 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:30.792278 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:30.792321 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:31.283906 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:31.292747 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:31.292781 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:31.783582 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:31.791180 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:31.791231 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:32.283385 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:32.297461 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:32.297525 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:32.783149 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:32.791124 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:32.791157 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:33.283422 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:33.291166 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:33.291198 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:33.783752 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:33.791439 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:33.791467 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:34.283768 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:34.291722 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:34.291758 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:34.783178 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:34.790755 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:34.790790 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:35.283071 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:35.291928 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:35.291961 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:35.783228 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:35.790884 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:35.790914 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:36.283100 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:36.290999 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:36.291028 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:36.783496 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:36.791742 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:36.791777 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
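	The pair above is one iteration of minikube's apiserver health probe (api_server.go), which polls the /healthz endpoint roughly every 500ms until it returns 200 or the wait times out. A minimal Go sketch of that loop, assuming direct network access to 192.168.49.2:8443 and skipping TLS verification for illustration (the real checker authenticates with the cluster's client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Assumption for illustration only: skip certificate verification.
		// minikube's actual checker uses the cluster's TLS credentials.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// On failure the verbose body lists each check as [+]/[-],
				// exactly like the log lines above.
				fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		fmt.Println("timed out waiting for a healthy apiserver")
	}

	Since kube-apiserver also exposes each health check as its own subpath, the one failing hook here should be queryable on its own at /healthz/poststarthook/start-service-ip-repair-controllers once the endpoint is reachable.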
	[... 17 further I/W poll pairs omitted: from 01:11:37.283 through 01:11:45.291 the checker hit https://192.168.49.2:8443/healthz every ~500ms and received the same verbose 500 body each time, with every check [+] ok except [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld ...]
	I0420 01:11:45.783929 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:45.806941 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:45.806975 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:46.283154 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:46.290771 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:46.290800 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:46.783396 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:46.791574 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:46.791628 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:47.283101 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:47.290983 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:47.291014 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:47.783093 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:47.790836 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:47.790867 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:48.283085 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:48.290814 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:48.290843 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:48.783098 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:48.791837 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:48.791875 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:49.283012 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:49.290705 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:49.290749 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:49.783145 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:49.790882 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:49.790914 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:50.283081 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:50.290725 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:50.290755 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:50.783347 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:50.790931 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:50.790960 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:51.283643 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:51.291438 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:51.291479 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:51.783205 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:51.790836 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:51.790867 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:52.283079 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:52.290810 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:52.290840 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:52.783071 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:52.802354 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:52.802386 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:53.284011 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:53.291740 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:53.291781 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:53.783028 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:53.790726 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:53.790754 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:54.283649 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:54.291464 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:54.291494 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:54.783033 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:54.790956 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:54.790985 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:55.283073 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:55.292051 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:55.292084 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
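The repeated 500 dumps above are minikube polling https://192.168.49.2:8443/healthz roughly every 500ms until every post-start hook reports ok; as long as a hook such as poststarthook/start-service-ip-repair-controllers is still failing, the whole check returns 500 and the body lists each hook as [+] ok or [-] failed. What follows is a minimal Go sketch of that retry shape, not minikube's actual api_server.go code: it assumes anonymous access to /healthz and skips TLS verification, where the real client authenticates with the cluster's certificates.

	// healthzpoll: a minimal sketch of the retry loop behind the dumps above.
	// Assumptions (not minikube's real code): anonymous access to /healthz and
	// InsecureSkipVerify in place of the cluster's client certificates.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver finally answers "ok"
				}
				// On 500 the body lists every post-start hook, exactly like above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // the log shows a similar cadence
		}
		return fmt.Errorf("apiserver never became healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}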
	I0420 01:11:55.783578 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:11:55.783684 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:11:55.831232 1701586 cri.go:89] found id: "9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283"
	I0420 01:11:55.831254 1701586 cri.go:89] found id: "2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64"
	I0420 01:11:55.831259 1701586 cri.go:89] found id: ""
	I0420 01:11:55.831267 1701586 logs.go:276] 2 containers: [9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283 2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64]
	I0420 01:11:55.831336 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:55.834823 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:55.838183 1701586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:11:55.838248 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:11:55.886725 1701586 cri.go:89] found id: "d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6"
	I0420 01:11:55.886748 1701586 cri.go:89] found id: "7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef"
	I0420 01:11:55.886753 1701586 cri.go:89] found id: ""
	I0420 01:11:55.886761 1701586 logs.go:276] 2 containers: [d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6 7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef]
	I0420 01:11:55.886816 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:55.890750 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:55.895684 1701586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:11:55.895750 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:11:55.939066 1701586 cri.go:89] found id: ""
	I0420 01:11:55.939088 1701586 logs.go:276] 0 containers: []
	W0420 01:11:55.939107 1701586 logs.go:278] No container was found matching "coredns"
	I0420 01:11:55.939114 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:11:55.939169 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:11:55.979506 1701586 cri.go:89] found id: "b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a"
	I0420 01:11:55.979527 1701586 cri.go:89] found id: "30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029"
	I0420 01:11:55.979532 1701586 cri.go:89] found id: ""
	I0420 01:11:55.979540 1701586 logs.go:276] 2 containers: [b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a 30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029]
	I0420 01:11:55.979648 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:55.983162 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:55.986600 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:11:55.986670 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:11:56.031883 1701586 cri.go:89] found id: "6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c"
	I0420 01:11:56.031906 1701586 cri.go:89] found id: ""
	I0420 01:11:56.031913 1701586 logs.go:276] 1 containers: [6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c]
	I0420 01:11:56.031996 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:56.035605 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:11:56.035720 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:11:56.077855 1701586 cri.go:89] found id: "a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae"
	I0420 01:11:56.077879 1701586 cri.go:89] found id: "9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae"
	I0420 01:11:56.077885 1701586 cri.go:89] found id: ""
	I0420 01:11:56.077893 1701586 logs.go:276] 2 containers: [a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae 9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae]
	I0420 01:11:56.077964 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:56.081448 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:56.084878 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:11:56.084948 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:11:56.125632 1701586 cri.go:89] found id: "df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94"
	I0420 01:11:56.125652 1701586 cri.go:89] found id: ""
	I0420 01:11:56.125660 1701586 logs.go:276] 1 containers: [df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94]
	I0420 01:11:56.125713 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:56.129507 1701586 logs.go:123] Gathering logs for kube-controller-manager [9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae] ...
	I0420 01:11:56.129556 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae"
	I0420 01:11:56.166086 1701586 logs.go:123] Gathering logs for container status ...
	I0420 01:11:56.166117 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:11:56.213132 1701586 logs.go:123] Gathering logs for kube-scheduler [30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029] ...
	I0420 01:11:56.213161 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029"
	I0420 01:11:56.253036 1701586 logs.go:123] Gathering logs for kube-controller-manager [a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae] ...
	I0420 01:11:56.253072 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae"
	I0420 01:11:56.328556 1701586 logs.go:123] Gathering logs for kube-apiserver [9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283] ...
	I0420 01:11:56.328596 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283"
	I0420 01:11:56.378835 1701586 logs.go:123] Gathering logs for kube-apiserver [2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64] ...
	I0420 01:11:56.378872 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64"
	I0420 01:11:56.420867 1701586 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:11:56.420899 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:11:56.754563 1701586 logs.go:123] Gathering logs for kube-scheduler [b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a] ...
	I0420 01:11:56.754600 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a"
	I0420 01:11:56.808185 1701586 logs.go:123] Gathering logs for kube-proxy [6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c] ...
	I0420 01:11:56.808216 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c"
	I0420 01:11:56.880384 1701586 logs.go:123] Gathering logs for kubelet ...
	I0420 01:11:56.880412 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:11:56.960551 1701586 logs.go:123] Gathering logs for dmesg ...
	I0420 01:11:56.960648 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:11:56.984695 1701586 logs.go:123] Gathering logs for kindnet [df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94] ...
	I0420 01:11:56.984767 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94"
	I0420 01:11:57.036988 1701586 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:11:57.037019 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:11:57.114130 1701586 logs.go:123] Gathering logs for etcd [d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6] ...
	I0420 01:11:57.114167 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6"
	I0420 01:11:57.173508 1701586 logs.go:123] Gathering logs for etcd [7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef] ...
	I0420 01:11:57.173821 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef"
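The "Gathering logs for ..." cycle above follows one pattern per component: resolve container IDs with crictl ps -a --quiet --name=<component>, then tail each container with crictl logs --tail 400 <id>. Below is a hedged sketch of that pattern as a hypothetical helper run directly on the node; minikube itself issues the same commands over SSH through ssh_runner.go.

	// gatherlogs: the crictl pattern visible above, run locally instead of
	// over SSH. Hypothetical helper, not minikube's logs.go implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(name string) ([]string, error) {
		// Equivalent of: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func tailLogs(id string, lines int) (string, error) {
		// Equivalent of: sudo crictl logs --tail 400 <id>
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
			ids, err := containerIDs(component)
			if err != nil {
				fmt.Println("list failed:", err)
				continue
			}
			for _, id := range ids {
				logs, _ := tailLogs(id, 400)
				fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
			}
		}
	}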
	I0420 01:11:59.754791 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:11:59.762549 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0420 01:11:59.762594 1701586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0420 01:11:59.762648 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:11:59.762734 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:11:59.806485 1701586 cri.go:89] found id: "9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283"
	I0420 01:11:59.806507 1701586 cri.go:89] found id: "2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64"
	I0420 01:11:59.806512 1701586 cri.go:89] found id: ""
	I0420 01:11:59.806519 1701586 logs.go:276] 2 containers: [9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283 2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64]
	I0420 01:11:59.806573 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:59.810188 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:59.813518 1701586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:11:59.813629 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:11:59.852675 1701586 cri.go:89] found id: "d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6"
	I0420 01:11:59.852698 1701586 cri.go:89] found id: "7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef"
	I0420 01:11:59.852704 1701586 cri.go:89] found id: ""
	I0420 01:11:59.852711 1701586 logs.go:276] 2 containers: [d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6 7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef]
	I0420 01:11:59.852771 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:59.856573 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:59.861333 1701586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:11:59.861407 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:11:59.910577 1701586 cri.go:89] found id: ""
	I0420 01:11:59.910656 1701586 logs.go:276] 0 containers: []
	W0420 01:11:59.910679 1701586 logs.go:278] No container was found matching "coredns"
	I0420 01:11:59.910703 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:11:59.910780 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:11:59.952625 1701586 cri.go:89] found id: "b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a"
	I0420 01:11:59.952695 1701586 cri.go:89] found id: "30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029"
	I0420 01:11:59.952715 1701586 cri.go:89] found id: ""
	I0420 01:11:59.952742 1701586 logs.go:276] 2 containers: [b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a 30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029]
	I0420 01:11:59.952826 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:59.957923 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:11:59.961993 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:11:59.962104 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:12:00.004991 1701586 cri.go:89] found id: "6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c"
	I0420 01:12:00.005014 1701586 cri.go:89] found id: ""
	I0420 01:12:00.005023 1701586 logs.go:276] 1 containers: [6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c]
	I0420 01:12:00.005091 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:00.010362 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:12:00.010448 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:12:00.144541 1701586 cri.go:89] found id: "a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae"
	I0420 01:12:00.144564 1701586 cri.go:89] found id: "9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae"
	I0420 01:12:00.144570 1701586 cri.go:89] found id: ""
	I0420 01:12:00.144579 1701586 logs.go:276] 2 containers: [a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae 9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae]
	I0420 01:12:00.144650 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:00.150882 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:00.158533 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:12:00.158718 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:12:00.305450 1701586 cri.go:89] found id: "df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94"
	I0420 01:12:00.305547 1701586 cri.go:89] found id: ""
	I0420 01:12:00.305572 1701586 logs.go:276] 1 containers: [df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94]
	I0420 01:12:00.305677 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:00.345268 1701586 logs.go:123] Gathering logs for kube-apiserver [9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283] ...
	I0420 01:12:00.345359 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283"
	I0420 01:12:00.507240 1701586 logs.go:123] Gathering logs for kube-scheduler [b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a] ...
	I0420 01:12:00.507279 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a"
	I0420 01:12:00.581390 1701586 logs.go:123] Gathering logs for kube-proxy [6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c] ...
	I0420 01:12:00.581430 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c"
	I0420 01:12:00.640171 1701586 logs.go:123] Gathering logs for kube-controller-manager [a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae] ...
	I0420 01:12:00.640204 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae"
	I0420 01:12:00.766876 1701586 logs.go:123] Gathering logs for kubelet ...
	I0420 01:12:00.766918 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:12:00.839512 1701586 logs.go:123] Gathering logs for dmesg ...
	I0420 01:12:00.839594 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:12:00.867390 1701586 logs.go:123] Gathering logs for kube-apiserver [2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64] ...
	I0420 01:12:00.867465 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64"
	I0420 01:12:00.962834 1701586 logs.go:123] Gathering logs for etcd [d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6] ...
	I0420 01:12:00.962907 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6"
	I0420 01:12:01.027442 1701586 logs.go:123] Gathering logs for etcd [7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef] ...
	I0420 01:12:01.027519 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef"
	I0420 01:12:01.095198 1701586 logs.go:123] Gathering logs for kube-scheduler [30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029] ...
	I0420 01:12:01.095233 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029"
	I0420 01:12:01.139515 1701586 logs.go:123] Gathering logs for kindnet [df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94] ...
	I0420 01:12:01.139562 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94"
	I0420 01:12:01.190019 1701586 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:12:01.190050 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:12:01.268267 1701586 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:12:01.268303 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:12:01.591566 1701586 logs.go:123] Gathering logs for kube-controller-manager [9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae] ...
	I0420 01:12:01.591601 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae"
	I0420 01:12:01.645837 1701586 logs.go:123] Gathering logs for container status ...
	I0420 01:12:01.645873 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:12:04.196430 1701586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0420 01:12:04.207026 1701586 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0420 01:12:04.207107 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0420 01:12:04.207113 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:04.207122 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:04.207127 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:04.220182 1701586 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0420 01:12:04.220297 1701586 api_server.go:141] control plane version: v1.30.0
	I0420 01:12:04.220313 1701586 api_server.go:131] duration metric: took 39.937374196s to wait for apiserver health ...
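Note how the failing hook migrated down the list as startup progressed (first poststarthook/start-service-ip-repair-controllers, later rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) until /healthz finally answered 200 "ok". The round_trippers lines then show a plain GET of /version, with the same Accept and User-Agent headers logged above, to read back the control-plane version v1.30.0. A minimal sketch of that request, under the same anonymous-access and skipped-TLS assumptions as the healthz sketch:

	// versioncheck: the GET /version call logged by round_trippers above.
	// Same hedges as before: the real client uses cluster credentials; this
	// sketch assumes anonymous access and skips TLS verification.
	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		req, err := http.NewRequest("GET", "https://192.168.49.2:8443/version", nil)
		if err != nil {
			panic(err)
		}
		// The same headers round_trippers logs for the real request.
		req.Header.Set("Accept", "application/json, */*")
		req.Header.Set("User-Agent", "minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format")

		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		var v struct {
			GitVersion string `json:"gitVersion"` // e.g. "v1.30.0"
		}
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion)
	}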
	I0420 01:12:04.220321 1701586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0420 01:12:04.220343 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0420 01:12:04.220405 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0420 01:12:04.262147 1701586 cri.go:89] found id: "9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283"
	I0420 01:12:04.262168 1701586 cri.go:89] found id: "2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64"
	I0420 01:12:04.262172 1701586 cri.go:89] found id: ""
	I0420 01:12:04.262180 1701586 logs.go:276] 2 containers: [9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283 2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64]
	I0420 01:12:04.262239 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.266115 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.269491 1701586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0420 01:12:04.269595 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0420 01:12:04.310533 1701586 cri.go:89] found id: "d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6"
	I0420 01:12:04.310559 1701586 cri.go:89] found id: "7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef"
	I0420 01:12:04.310565 1701586 cri.go:89] found id: ""
	I0420 01:12:04.310573 1701586 logs.go:276] 2 containers: [d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6 7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef]
	I0420 01:12:04.310630 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.314441 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.317758 1701586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0420 01:12:04.317884 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0420 01:12:04.355843 1701586 cri.go:89] found id: ""
	I0420 01:12:04.355916 1701586 logs.go:276] 0 containers: []
	W0420 01:12:04.355943 1701586 logs.go:278] No container was found matching "coredns"
	I0420 01:12:04.355977 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0420 01:12:04.356090 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0420 01:12:04.402965 1701586 cri.go:89] found id: "b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a"
	I0420 01:12:04.402989 1701586 cri.go:89] found id: "30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029"
	I0420 01:12:04.402994 1701586 cri.go:89] found id: ""
	I0420 01:12:04.403002 1701586 logs.go:276] 2 containers: [b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a 30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029]
	I0420 01:12:04.403065 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.407011 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.410999 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0420 01:12:04.411074 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0420 01:12:04.456390 1701586 cri.go:89] found id: "6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c"
	I0420 01:12:04.456413 1701586 cri.go:89] found id: ""
	I0420 01:12:04.456421 1701586 logs.go:276] 1 containers: [6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c]
	I0420 01:12:04.456501 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.460740 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0420 01:12:04.460842 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0420 01:12:04.507206 1701586 cri.go:89] found id: "a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae"
	I0420 01:12:04.507275 1701586 cri.go:89] found id: "9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae"
	I0420 01:12:04.507294 1701586 cri.go:89] found id: ""
	I0420 01:12:04.507309 1701586 logs.go:276] 2 containers: [a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae 9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae]
	I0420 01:12:04.507372 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.510864 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.514220 1701586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0420 01:12:04.514290 1701586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0420 01:12:04.554873 1701586 cri.go:89] found id: "df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94"
	I0420 01:12:04.554941 1701586 cri.go:89] found id: ""
	I0420 01:12:04.554956 1701586 logs.go:276] 1 containers: [df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94]
	I0420 01:12:04.555012 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:04.558564 1701586 logs.go:123] Gathering logs for describe nodes ...
	I0420 01:12:04.558589 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0420 01:12:04.809030 1701586 logs.go:123] Gathering logs for kube-apiserver [9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283] ...
	I0420 01:12:04.809067 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9affaae1e7152a285b80ac62dbc720061d92d3dede04b7b8cfe0a7adb3239283"
	I0420 01:12:04.861137 1701586 logs.go:123] Gathering logs for etcd [7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef] ...
	I0420 01:12:04.861170 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e7081f8c419cc4fb62c9f013d468084039a2a91e73b9d5d4b42da816b26c0ef"
	I0420 01:12:04.935064 1701586 logs.go:123] Gathering logs for kube-scheduler [30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029] ...
	I0420 01:12:04.935099 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30e51019ad13a6f89bce9e4ade84c880eae19d004a047fd0109269e298e9c029"
	I0420 01:12:04.975376 1701586 logs.go:123] Gathering logs for kubelet ...
	I0420 01:12:04.975406 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0420 01:12:05.030123 1701586 logs.go:123] Gathering logs for etcd [d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6] ...
	I0420 01:12:05.030160 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d537cc9708d3a46da837359173b0e29ed28fd6ecab263b8d7eec9a727968d5f6"
	I0420 01:12:05.088645 1701586 logs.go:123] Gathering logs for kube-scheduler [b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a] ...
	I0420 01:12:05.088713 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2acca3a334698788b5d42948316e127fbefc07642d8e97dd59bf510e8dbe48a"
	I0420 01:12:05.129787 1701586 logs.go:123] Gathering logs for kube-controller-manager [a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae] ...
	I0420 01:12:05.129834 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8218a83021afbe21ffd01d2a872cc153f269b5cc1c01fdda0aaa409a145c4ae"
	I0420 01:12:05.185506 1701586 logs.go:123] Gathering logs for kube-proxy [6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c] ...
	I0420 01:12:05.185617 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d26959f6431c829f300dfaa5736fe0cd5607631d6308696019617456fc1cc1c"
	I0420 01:12:05.228111 1701586 logs.go:123] Gathering logs for kube-controller-manager [9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae] ...
	I0420 01:12:05.228142 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c0e729894c85443bec0f9569be7353ae4a867867401662b1e28624b407287ae"
	I0420 01:12:05.263507 1701586 logs.go:123] Gathering logs for kindnet [df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94] ...
	I0420 01:12:05.263536 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df96881d69f56dc08011be8e35e8c2693840a007f6beebe26ba22d459b064e94"
	I0420 01:12:05.311531 1701586 logs.go:123] Gathering logs for CRI-O ...
	I0420 01:12:05.311559 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0420 01:12:05.383919 1701586 logs.go:123] Gathering logs for dmesg ...
	I0420 01:12:05.383957 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0420 01:12:05.406780 1701586 logs.go:123] Gathering logs for kube-apiserver [2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64] ...
	I0420 01:12:05.406814 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2279aacc8b049173ca5a5382470e5df030954ebd56445749743d6b0903e53c64"
	I0420 01:12:05.455371 1701586 logs.go:123] Gathering logs for container status ...
	I0420 01:12:05.455399 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0420 01:12:08.002088 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0420 01:12:08.002116 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:08.002126 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:08.002130 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:08.011816 1701586 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 01:12:08.021050 1701586 system_pods.go:59] 26 kube-system pods found
	I0420 01:12:08.021087 1701586 system_pods.go:61] "coredns-7db6d8ff4d-f6s2n" [5d0a310f-6030-4129-987a-7bb741793024] Running
	I0420 01:12:08.021093 1701586 system_pods.go:61] "coredns-7db6d8ff4d-h2b7f" [c91d3f51-7652-48ba-b475-f28672554fe9] Running
	I0420 01:12:08.021098 1701586 system_pods.go:61] "etcd-ha-159256" [ea9416cc-3861-4564-a5c1-484a9e500178] Running
	I0420 01:12:08.021168 1701586 system_pods.go:61] "etcd-ha-159256-m02" [389371eb-291f-4a78-808e-7c50fb519fff] Running
	I0420 01:12:08.021184 1701586 system_pods.go:61] "etcd-ha-159256-m03" [233a6573-5597-4bd3-8c20-f38c0bb41659] Running
	I0420 01:12:08.021215 1701586 system_pods.go:61] "kindnet-gf5zb" [f61dbb3f-bee7-423e-b469-6a9efe744682] Running
	I0420 01:12:08.021226 1701586 system_pods.go:61] "kindnet-nfg5r" [364685a6-5023-47f1-b0d4-2fe5f644c40e] Running
	I0420 01:12:08.021230 1701586 system_pods.go:61] "kindnet-x7psn" [f9abdf43-720b-49d5-a11d-6a2276d1f3f8] Running
	I0420 01:12:08.021246 1701586 system_pods.go:61] "kindnet-zcfdw" [7c5ae146-052a-4ac5-8eec-ca4d8a50ed15] Running
	I0420 01:12:08.021261 1701586 system_pods.go:61] "kube-apiserver-ha-159256" [87ab5444-57a2-4089-857f-c9c154b8348c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:12:08.021267 1701586 system_pods.go:61] "kube-apiserver-ha-159256-m02" [02452df2-f4ab-48db-89bc-1d06f47f27cf] Running
	I0420 01:12:08.021290 1701586 system_pods.go:61] "kube-apiserver-ha-159256-m03" [618fc15c-45a0-4540-8828-a57819902ff9] Running
	I0420 01:12:08.021305 1701586 system_pods.go:61] "kube-controller-manager-ha-159256" [4eac3f49-63b9-4515-9d7f-e3d94323e326] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:12:08.021311 1701586 system_pods.go:61] "kube-controller-manager-ha-159256-m02" [b23031bf-9a29-4e56-b45c-0446301f0651] Running
	I0420 01:12:08.021320 1701586 system_pods.go:61] "kube-controller-manager-ha-159256-m03" [c1578866-d824-4b53-8152-055df0cea62d] Running
	I0420 01:12:08.021324 1701586 system_pods.go:61] "kube-proxy-5f79r" [47e25f27-1f01-44ab-aed7-6a85fdd969d2] Running
	I0420 01:12:08.021327 1701586 system_pods.go:61] "kube-proxy-6hlpp" [8c8445d9-d263-405d-a078-7cec7d9d09d3] Running
	I0420 01:12:08.021331 1701586 system_pods.go:61] "kube-proxy-f26nw" [3494612f-fd0e-4d20-a379-5df83241bb04] Running
	I0420 01:12:08.021338 1701586 system_pods.go:61] "kube-proxy-pstnt" [51c169ba-3581-4c1f-a045-fa2b0a245729] Running
	I0420 01:12:08.021343 1701586 system_pods.go:61] "kube-scheduler-ha-159256" [c501b0a0-2d96-4d22-a832-1c860ec3575f] Running
	I0420 01:12:08.021360 1701586 system_pods.go:61] "kube-scheduler-ha-159256-m02" [7db33b7d-5e96-4851-b006-25b5bcce8fdf] Running
	I0420 01:12:08.021372 1701586 system_pods.go:61] "kube-scheduler-ha-159256-m03" [4d63e05b-89d9-44a3-8643-ce6b6562e144] Running
	I0420 01:12:08.021377 1701586 system_pods.go:61] "kube-vip-ha-159256" [782634f4-72a9-4945-951c-dba16b7b0d28] Running
	I0420 01:12:08.021389 1701586 system_pods.go:61] "kube-vip-ha-159256-m02" [0ed4a91e-44cb-428a-8850-5ffe8e0964e2] Running
	I0420 01:12:08.021393 1701586 system_pods.go:61] "kube-vip-ha-159256-m03" [fd7a1399-832d-497f-916e-a7b364b6f99a] Running
	I0420 01:12:08.021398 1701586 system_pods.go:61] "storage-provisioner" [c734fc70-9fc5-423e-8395-902d3a27627e] Running
	I0420 01:12:08.021405 1701586 system_pods.go:74] duration metric: took 3.801079011s to wait for pod list to return data ...
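The 26-pod inventory above comes from a single GET of /api/v1/namespaces/kube-system/pods; pods whose containers are not yet ready (kube-apiserver-ha-159256, kube-controller-manager-ha-159256) are flagged with their Ready/ContainersReady conditions. An equivalent listing with client-go, assuming an ordinary kubeconfig at ~/.kube/config rather than minikube's internal credentials:

	// podlist: the kube-system inventory above, via client-go. Assumption: a
	// working kubeconfig at $HOME/.kube/config pointing at this cluster.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			// Mirrors the log format above: "name" [uid] phase
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
	}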
	I0420 01:12:08.021417 1701586 default_sa.go:34] waiting for default service account to be created ...
	I0420 01:12:08.021515 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0420 01:12:08.021564 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:08.021581 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:08.021586 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:08.024713 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:08.024953 1701586 default_sa.go:45] found service account: "default"
	I0420 01:12:08.024972 1701586 default_sa.go:55] duration metric: took 3.548444ms for default service account to be created ...
	I0420 01:12:08.024988 1701586 system_pods.go:116] waiting for k8s-apps to be running ...
	I0420 01:12:08.025051 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0420 01:12:08.025059 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:08.025067 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:08.025071 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:08.032263 1701586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 01:12:08.041517 1701586 system_pods.go:86] 26 kube-system pods found
	I0420 01:12:08.041617 1701586 system_pods.go:89] "coredns-7db6d8ff4d-f6s2n" [5d0a310f-6030-4129-987a-7bb741793024] Running
	I0420 01:12:08.041625 1701586 system_pods.go:89] "coredns-7db6d8ff4d-h2b7f" [c91d3f51-7652-48ba-b475-f28672554fe9] Running
	I0420 01:12:08.041631 1701586 system_pods.go:89] "etcd-ha-159256" [ea9416cc-3861-4564-a5c1-484a9e500178] Running
	I0420 01:12:08.041635 1701586 system_pods.go:89] "etcd-ha-159256-m02" [389371eb-291f-4a78-808e-7c50fb519fff] Running
	I0420 01:12:08.041639 1701586 system_pods.go:89] "etcd-ha-159256-m03" [233a6573-5597-4bd3-8c20-f38c0bb41659] Running
	I0420 01:12:08.041644 1701586 system_pods.go:89] "kindnet-gf5zb" [f61dbb3f-bee7-423e-b469-6a9efe744682] Running
	I0420 01:12:08.041651 1701586 system_pods.go:89] "kindnet-nfg5r" [364685a6-5023-47f1-b0d4-2fe5f644c40e] Running
	I0420 01:12:08.041662 1701586 system_pods.go:89] "kindnet-x7psn" [f9abdf43-720b-49d5-a11d-6a2276d1f3f8] Running
	I0420 01:12:08.041667 1701586 system_pods.go:89] "kindnet-zcfdw" [7c5ae146-052a-4ac5-8eec-ca4d8a50ed15] Running
	I0420 01:12:08.041678 1701586 system_pods.go:89] "kube-apiserver-ha-159256" [87ab5444-57a2-4089-857f-c9c154b8348c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0420 01:12:08.041685 1701586 system_pods.go:89] "kube-apiserver-ha-159256-m02" [02452df2-f4ab-48db-89bc-1d06f47f27cf] Running
	I0420 01:12:08.041695 1701586 system_pods.go:89] "kube-apiserver-ha-159256-m03" [618fc15c-45a0-4540-8828-a57819902ff9] Running
	I0420 01:12:08.041702 1701586 system_pods.go:89] "kube-controller-manager-ha-159256" [4eac3f49-63b9-4515-9d7f-e3d94323e326] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0420 01:12:08.041712 1701586 system_pods.go:89] "kube-controller-manager-ha-159256-m02" [b23031bf-9a29-4e56-b45c-0446301f0651] Running
	I0420 01:12:08.041717 1701586 system_pods.go:89] "kube-controller-manager-ha-159256-m03" [c1578866-d824-4b53-8152-055df0cea62d] Running
	I0420 01:12:08.041728 1701586 system_pods.go:89] "kube-proxy-5f79r" [47e25f27-1f01-44ab-aed7-6a85fdd969d2] Running
	I0420 01:12:08.041732 1701586 system_pods.go:89] "kube-proxy-6hlpp" [8c8445d9-d263-405d-a078-7cec7d9d09d3] Running
	I0420 01:12:08.041737 1701586 system_pods.go:89] "kube-proxy-f26nw" [3494612f-fd0e-4d20-a379-5df83241bb04] Running
	I0420 01:12:08.041741 1701586 system_pods.go:89] "kube-proxy-pstnt" [51c169ba-3581-4c1f-a045-fa2b0a245729] Running
	I0420 01:12:08.041745 1701586 system_pods.go:89] "kube-scheduler-ha-159256" [c501b0a0-2d96-4d22-a832-1c860ec3575f] Running
	I0420 01:12:08.041753 1701586 system_pods.go:89] "kube-scheduler-ha-159256-m02" [7db33b7d-5e96-4851-b006-25b5bcce8fdf] Running
	I0420 01:12:08.041764 1701586 system_pods.go:89] "kube-scheduler-ha-159256-m03" [4d63e05b-89d9-44a3-8643-ce6b6562e144] Running
	I0420 01:12:08.041769 1701586 system_pods.go:89] "kube-vip-ha-159256" [782634f4-72a9-4945-951c-dba16b7b0d28] Running
	I0420 01:12:08.041773 1701586 system_pods.go:89] "kube-vip-ha-159256-m02" [0ed4a91e-44cb-428a-8850-5ffe8e0964e2] Running
	I0420 01:12:08.041776 1701586 system_pods.go:89] "kube-vip-ha-159256-m03" [fd7a1399-832d-497f-916e-a7b364b6f99a] Running
	I0420 01:12:08.041786 1701586 system_pods.go:89] "storage-provisioner" [c734fc70-9fc5-423e-8395-902d3a27627e] Running
	I0420 01:12:08.041801 1701586 system_pods.go:126] duration metric: took 16.802469ms to wait for k8s-apps to be running ...
	I0420 01:12:08.041814 1701586 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:12:08.041882 1701586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:12:08.058152 1701586 system_svc.go:56] duration metric: took 16.327981ms WaitForService to wait for kubelet
	I0420 01:12:08.058183 1701586 kubeadm.go:576] duration metric: took 1m12.509235968s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
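The kubelet wait above reduces to an exit-code check: systemctl is-active --quiet exits 0 only while the unit is active, so the command's return value is the whole answer. A small sketch of that loop as a hypothetical helper run on the node; minikube issues the command over SSH:

	// svcwait: the kubelet liveness check above as a retry loop. The exit
	// status of `systemctl is-active --quiet` is the entire signal.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForUnit(unit string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil {
				return nil // unit is active
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s did not become active within %s", unit, timeout)
	}

	func main() {
		if err := waitForUnit("kubelet", time.Minute); err != nil {
			fmt.Println(err)
		}
	}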
	I0420 01:12:08.058204 1701586 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:12:08.058276 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0420 01:12:08.058286 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:08.058295 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:08.058301 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:08.062033 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:08.064106 1701586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0420 01:12:08.064148 1701586 node_conditions.go:123] node cpu capacity is 2
	I0420 01:12:08.064162 1701586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0420 01:12:08.064167 1701586 node_conditions.go:123] node cpu capacity is 2
	I0420 01:12:08.064172 1701586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0420 01:12:08.064176 1701586 node_conditions.go:123] node cpu capacity is 2
	I0420 01:12:08.064181 1701586 node_conditions.go:105] duration metric: took 5.97189ms to run NodePressure ...
	I0420 01:12:08.064193 1701586 start.go:240] waiting for startup goroutines ...
	I0420 01:12:08.064217 1701586 start.go:254] writing updated cluster config ...
	I0420 01:12:08.067467 1701586 out.go:177] 
	I0420 01:12:08.071217 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:12:08.071337 1701586 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/config.json ...
	I0420 01:12:08.074176 1701586 out.go:177] * Starting "ha-159256-m04" worker node in "ha-159256" cluster
	I0420 01:12:08.077241 1701586 cache.go:121] Beginning downloading kic base image for docker with crio
	I0420 01:12:08.079766 1701586 out.go:177] * Pulling base image v0.0.43 ...
	I0420 01:12:08.082244 1701586 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 01:12:08.082275 1701586 cache.go:56] Caching tarball of preloaded images
	I0420 01:12:08.082332 1701586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon
	I0420 01:12:08.082388 1701586 preload.go:173] Found /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0420 01:12:08.082401 1701586 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0420 01:12:08.082528 1701586 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/config.json ...
	I0420 01:12:08.096976 1701586 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon, skipping pull
	I0420 01:12:08.097001 1701586 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 exists in daemon, skipping load
	I0420 01:12:08.097024 1701586 cache.go:194] Successfully downloaded all kic artifacts
	I0420 01:12:08.097139 1701586 start.go:360] acquireMachinesLock for ha-159256-m04: {Name:mk664ee356e088e566be90fbe1fabbc9d380806d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0420 01:12:08.097568 1701586 start.go:364] duration metric: took 363.05µs to acquireMachinesLock for "ha-159256-m04"
	I0420 01:12:08.097607 1701586 start.go:96] Skipping create...Using existing machine configuration
	I0420 01:12:08.097613 1701586 fix.go:54] fixHost starting: m04
	I0420 01:12:08.097902 1701586 cli_runner.go:164] Run: docker container inspect ha-159256-m04 --format={{.State.Status}}
	I0420 01:12:08.114324 1701586 fix.go:112] recreateIfNeeded on ha-159256-m04: state=Stopped err=<nil>
	W0420 01:12:08.114352 1701586 fix.go:138] unexpected machine state, will restart: <nil>
	I0420 01:12:08.117687 1701586 out.go:177] * Restarting existing docker container for "ha-159256-m04" ...
	I0420 01:12:08.121132 1701586 cli_runner.go:164] Run: docker start ha-159256-m04
	I0420 01:12:08.432268 1701586 cli_runner.go:164] Run: docker container inspect ha-159256-m04 --format={{.State.Status}}
	I0420 01:12:08.453912 1701586 kic.go:430] container "ha-159256-m04" state is running.
	I0420 01:12:08.454260 1701586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256-m04
	I0420 01:12:08.472287 1701586 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/config.json ...
	I0420 01:12:08.472715 1701586 machine.go:94] provisionDockerMachine start ...
	I0420 01:12:08.472855 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:12:08.498585 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:08.498882 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34745 <nil> <nil>}
	I0420 01:12:08.498895 1701586 main.go:141] libmachine: About to run SSH command:
	hostname
	I0420 01:12:08.499880 1701586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0420 01:12:11.649016 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-159256-m04
	
	I0420 01:12:11.649051 1701586 ubuntu.go:169] provisioning hostname "ha-159256-m04"
	I0420 01:12:11.649123 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:12:11.669952 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:11.670204 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34745 <nil> <nil>}
	I0420 01:12:11.670221 1701586 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-159256-m04 && echo "ha-159256-m04" | sudo tee /etc/hostname
	I0420 01:12:11.827979 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-159256-m04
	
	I0420 01:12:11.828126 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:12:11.844730 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:11.844978 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34745 <nil> <nil>}
	I0420 01:12:11.844994 1701586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-159256-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-159256-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-159256-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0420 01:12:11.993111 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0420 01:12:11.993138 1701586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18703-1638187/.minikube CaCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18703-1638187/.minikube}
	I0420 01:12:11.993202 1701586 ubuntu.go:177] setting up certificates
	I0420 01:12:11.993212 1701586 provision.go:84] configureAuth start
	I0420 01:12:11.993286 1701586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256-m04
	I0420 01:12:12.013661 1701586 provision.go:143] copyHostCerts
	I0420 01:12:12.013711 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem
	I0420 01:12:12.013746 1701586 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem, removing ...
	I0420 01:12:12.013757 1701586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem
	I0420 01:12:12.013857 1701586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.pem (1082 bytes)
	I0420 01:12:12.013946 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem
	I0420 01:12:12.013969 1701586 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem, removing ...
	I0420 01:12:12.013974 1701586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem
	I0420 01:12:12.014003 1701586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/cert.pem (1123 bytes)
	I0420 01:12:12.014047 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem
	I0420 01:12:12.014067 1701586 exec_runner.go:144] found /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem, removing ...
	I0420 01:12:12.014072 1701586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem
	I0420 01:12:12.014101 1701586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18703-1638187/.minikube/key.pem (1675 bytes)
	I0420 01:12:12.014155 1701586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem org=jenkins.ha-159256-m04 san=[127.0.0.1 192.168.49.5 ha-159256-m04 localhost minikube]
	I0420 01:12:12.177917 1701586 provision.go:177] copyRemoteCerts
	I0420 01:12:12.178012 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0420 01:12:12.178071 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:12:12.198751 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34745 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m04/id_rsa Username:docker}
	I0420 01:12:12.302386 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0420 01:12:12.302449 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0420 01:12:12.327671 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0420 01:12:12.327733 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0420 01:12:12.352234 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0420 01:12:12.352302 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0420 01:12:12.385815 1701586 provision.go:87] duration metric: took 392.589095ms to configureAuth
	I0420 01:12:12.385840 1701586 ubuntu.go:193] setting minikube options for container-runtime
	I0420 01:12:12.386066 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:12:12.386171 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:12:12.404330 1701586 main.go:141] libmachine: Using SSH client type: native
	I0420 01:12:12.404591 1701586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34745 <nil> <nil>}
	I0420 01:12:12.404605 1701586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0420 01:12:12.680123 1701586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0420 01:12:12.680150 1701586 machine.go:97] duration metric: took 4.207420944s to provisionDockerMachine
	I0420 01:12:12.680162 1701586 start.go:293] postStartSetup for "ha-159256-m04" (driver="docker")
	I0420 01:12:12.680173 1701586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0420 01:12:12.680246 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0420 01:12:12.680305 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:12:12.695915 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34745 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m04/id_rsa Username:docker}
	I0420 01:12:12.803121 1701586 ssh_runner.go:195] Run: cat /etc/os-release
	I0420 01:12:12.806161 1701586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0420 01:12:12.806198 1701586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0420 01:12:12.806229 1701586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0420 01:12:12.806237 1701586 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0420 01:12:12.806248 1701586 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/addons for local assets ...
	I0420 01:12:12.806315 1701586 filesync.go:126] Scanning /home/jenkins/minikube-integration/18703-1638187/.minikube/files for local assets ...
	I0420 01:12:12.806396 1701586 filesync.go:149] local asset: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem -> 16436232.pem in /etc/ssl/certs
	I0420 01:12:12.806408 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem -> /etc/ssl/certs/16436232.pem
	I0420 01:12:12.806511 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0420 01:12:12.816515 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem --> /etc/ssl/certs/16436232.pem (1708 bytes)
	I0420 01:12:12.843213 1701586 start.go:296] duration metric: took 163.035982ms for postStartSetup
	I0420 01:12:12.843301 1701586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 01:12:12.843344 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:12:12.858533 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34745 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m04/id_rsa Username:docker}
	I0420 01:12:12.958592 1701586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0420 01:12:12.963193 1701586 fix.go:56] duration metric: took 4.865573035s for fixHost
	I0420 01:12:12.963220 1701586 start.go:83] releasing machines lock for "ha-159256-m04", held for 4.86562928s
	I0420 01:12:12.963288 1701586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256-m04
	I0420 01:12:12.981946 1701586 out.go:177] * Found network options:
	I0420 01:12:12.983941 1701586 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0420 01:12:12.985940 1701586 proxy.go:119] fail to check proxy env: Error ip not in block
	W0420 01:12:12.985972 1701586 proxy.go:119] fail to check proxy env: Error ip not in block
	W0420 01:12:12.985996 1701586 proxy.go:119] fail to check proxy env: Error ip not in block
	W0420 01:12:12.986009 1701586 proxy.go:119] fail to check proxy env: Error ip not in block
	I0420 01:12:12.986083 1701586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0420 01:12:12.986136 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:12:12.986401 1701586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0420 01:12:12.986456 1701586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:12:13.004213 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34745 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m04/id_rsa Username:docker}
	I0420 01:12:13.004738 1701586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34745 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m04/id_rsa Username:docker}
	I0420 01:12:13.326165 1701586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0420 01:12:13.333885 1701586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:12:13.349210 1701586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0420 01:12:13.349287 1701586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0420 01:12:13.360240 1701586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0420 01:12:13.360267 1701586 start.go:494] detecting cgroup driver to use...
	I0420 01:12:13.360299 1701586 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0420 01:12:13.360351 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0420 01:12:13.375707 1701586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0420 01:12:13.391900 1701586 docker.go:217] disabling cri-docker service (if available) ...
	I0420 01:12:13.391968 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0420 01:12:13.407870 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0420 01:12:13.423656 1701586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0420 01:12:13.553932 1701586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0420 01:12:13.667392 1701586 docker.go:233] disabling docker service ...
	I0420 01:12:13.667460 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0420 01:12:13.680641 1701586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0420 01:12:13.693131 1701586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0420 01:12:13.793322 1701586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0420 01:12:13.881495 1701586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0420 01:12:13.894110 1701586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0420 01:12:13.912017 1701586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0420 01:12:13.912150 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:13.925923 1701586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0420 01:12:13.926045 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:13.942021 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:13.952320 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:13.962697 1701586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0420 01:12:13.982866 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:13.993152 1701586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:14.005442 1701586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0420 01:12:14.017388 1701586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0420 01:12:14.028269 1701586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0420 01:12:14.037483 1701586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:12:14.146067 1701586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0420 01:12:14.280864 1701586 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0420 01:12:14.280981 1701586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0420 01:12:14.286800 1701586 start.go:562] Will wait 60s for crictl version
	I0420 01:12:14.286869 1701586 ssh_runner.go:195] Run: which crictl
	I0420 01:12:14.290829 1701586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0420 01:12:14.335571 1701586 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0420 01:12:14.335720 1701586 ssh_runner.go:195] Run: crio --version
	I0420 01:12:14.377380 1701586 ssh_runner.go:195] Run: crio --version
	I0420 01:12:14.424702 1701586 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0420 01:12:14.426929 1701586 out.go:177]   - env NO_PROXY=192.168.49.2
	I0420 01:12:14.429079 1701586 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0420 01:12:14.431154 1701586 cli_runner.go:164] Run: docker network inspect ha-159256 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0420 01:12:14.449741 1701586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0420 01:12:14.454422 1701586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:12:14.465599 1701586 mustload.go:65] Loading cluster: ha-159256
	I0420 01:12:14.465838 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:12:14.466101 1701586 cli_runner.go:164] Run: docker container inspect ha-159256 --format={{.State.Status}}
	I0420 01:12:14.482135 1701586 host.go:66] Checking if "ha-159256" exists ...
	I0420 01:12:14.482476 1701586 certs.go:68] Setting up /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256 for IP: 192.168.49.5
	I0420 01:12:14.482490 1701586 certs.go:194] generating shared ca certs ...
	I0420 01:12:14.482505 1701586 certs.go:226] acquiring lock for ca certs: {Name:mkf02d2bd3e0f29e12b7cec7c5b9a48566830288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 01:12:14.482629 1701586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key
	I0420 01:12:14.482675 1701586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key
	I0420 01:12:14.482690 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0420 01:12:14.482707 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0420 01:12:14.482719 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0420 01:12:14.482734 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0420 01:12:14.482787 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623.pem (1338 bytes)
	W0420 01:12:14.482821 1701586 certs.go:480] ignoring /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623_empty.pem, impossibly tiny 0 bytes
	I0420 01:12:14.482833 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca-key.pem (1679 bytes)
	I0420 01:12:14.482857 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/ca.pem (1082 bytes)
	I0420 01:12:14.482885 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/cert.pem (1123 bytes)
	I0420 01:12:14.482915 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/key.pem (1675 bytes)
	I0420 01:12:14.482960 1701586 certs.go:484] found cert: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem (1708 bytes)
	I0420 01:12:14.482991 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623.pem -> /usr/share/ca-certificates/1643623.pem
	I0420 01:12:14.483003 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem -> /usr/share/ca-certificates/16436232.pem
	I0420 01:12:14.483014 1701586 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:12:14.483032 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0420 01:12:14.508376 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0420 01:12:14.541413 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0420 01:12:14.566382 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0420 01:12:14.590997 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/certs/1643623.pem --> /usr/share/ca-certificates/1643623.pem (1338 bytes)
	I0420 01:12:14.617163 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/ssl/certs/16436232.pem --> /usr/share/ca-certificates/16436232.pem (1708 bytes)
	I0420 01:12:14.644673 1701586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0420 01:12:14.674933 1701586 ssh_runner.go:195] Run: openssl version
	I0420 01:12:14.680276 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1643623.pem && ln -fs /usr/share/ca-certificates/1643623.pem /etc/ssl/certs/1643623.pem"
	I0420 01:12:14.694944 1701586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1643623.pem
	I0420 01:12:14.698603 1701586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 20 00:57 /usr/share/ca-certificates/1643623.pem
	I0420 01:12:14.698695 1701586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1643623.pem
	I0420 01:12:14.705744 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1643623.pem /etc/ssl/certs/51391683.0"
	I0420 01:12:14.715157 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16436232.pem && ln -fs /usr/share/ca-certificates/16436232.pem /etc/ssl/certs/16436232.pem"
	I0420 01:12:14.726505 1701586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16436232.pem
	I0420 01:12:14.730255 1701586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 20 00:57 /usr/share/ca-certificates/16436232.pem
	I0420 01:12:14.730322 1701586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16436232.pem
	I0420 01:12:14.737464 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16436232.pem /etc/ssl/certs/3ec20f2e.0"
	I0420 01:12:14.747282 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0420 01:12:14.757072 1701586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:12:14.760767 1701586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 20 00:46 /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:12:14.760835 1701586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0420 01:12:14.768648 1701586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0420 01:12:14.778283 1701586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0420 01:12:14.781690 1701586 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0420 01:12:14.781735 1701586 kubeadm.go:928] updating node {m04 192.168.49.5 0 v1.30.0  false true} ...
	I0420 01:12:14.781840 1701586 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-159256-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-159256 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0420 01:12:14.781908 1701586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0420 01:12:14.790585 1701586 binaries.go:44] Found k8s binaries, skipping transfer
	I0420 01:12:14.790708 1701586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0420 01:12:14.799384 1701586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0420 01:12:14.817466 1701586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0420 01:12:14.838563 1701586 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0420 01:12:14.842270 1701586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0420 01:12:14.853382 1701586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:12:14.949328 1701586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:12:14.961332 1701586 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0420 01:12:14.964878 1701586 out.go:177] * Verifying Kubernetes components...
	I0420 01:12:14.961854 1701586 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:12:14.967016 1701586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0420 01:12:15.081725 1701586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0420 01:12:15.096623 1701586 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 01:12:15.096919 1701586 kapi.go:59] client config for ha-159256: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/client.crt", KeyFile:"/home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/ha-159256/client.key", CAFile:"/home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17a1410), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0420 01:12:15.096990 1701586 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0420 01:12:15.097217 1701586 node_ready.go:35] waiting up to 6m0s for node "ha-159256-m04" to be "Ready" ...
	I0420 01:12:15.097296 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:15.097306 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:15.097317 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:15.097327 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:15.100652 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:15.598259 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:15.598283 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:15.598293 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:15.598297 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:15.601081 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:16.098062 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:16.098087 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:16.098097 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:16.098101 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:16.100893 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:16.598180 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:16.598202 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:16.598211 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:16.598233 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:16.610747 1701586 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0420 01:12:17.098063 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:17.098107 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:17.098117 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:17.098123 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:17.101016 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:17.101712 1701586 node_ready.go:53] node "ha-159256-m04" has status "Ready":"Unknown"
	I0420 01:12:17.597751 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:17.597774 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:17.597797 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:17.597803 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:17.607690 1701586 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0420 01:12:18.098024 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:18.098052 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:18.098062 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:18.098067 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:18.101451 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:18.598225 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:18.598252 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:18.598270 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:18.598275 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:18.601698 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:19.097445 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:19.097476 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:19.097486 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:19.097490 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:19.100290 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:19.597942 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:19.597970 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:19.597979 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:19.597984 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:19.600825 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:19.601868 1701586 node_ready.go:53] node "ha-159256-m04" has status "Ready":"Unknown"
	I0420 01:12:20.097473 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:20.097500 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:20.097510 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:20.097516 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:20.100475 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:20.598031 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:20.598052 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:20.598061 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:20.598065 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:20.600886 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:21.097781 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:21.097802 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:21.097812 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:21.097816 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:21.100899 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:21.598145 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:21.598165 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:21.598174 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:21.598179 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:21.600699 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.097997 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:22.098020 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.098029 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.098033 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.100654 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.101384 1701586 node_ready.go:49] node "ha-159256-m04" has status "Ready":"True"
	I0420 01:12:22.101404 1701586 node_ready.go:38] duration metric: took 7.004167342s for node "ha-159256-m04" to be "Ready" ...
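
The repeated GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04 requests above are minikube's node-readiness poll: roughly every 500ms it re-reads the Node object until the Ready condition flips from "Unknown" to "True". A minimal client-go sketch of the same check follows; the kubeconfig path is a placeholder, and the 6m timeout mirrors the wait advertised at node_ready.go:35 in the log. This is an illustration under those assumptions, not minikube's own implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady re-fetches the Node every 500ms until its Ready
    // condition reports True, mirroring the GET loop in the log above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
    	// Placeholder path: minikube integration runs write their own kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(cs, "ha-159256-m04", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node Ready")
    }
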
	I0420 01:12:22.101413 1701586 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0420 01:12:22.101473 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0420 01:12:22.101486 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.101495 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.101499 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.106998 1701586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 01:12:22.113870 1701586 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f6s2n" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:22.114007 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f6s2n
	I0420 01:12:22.114025 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.114034 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.114039 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.116749 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.117500 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:22.117516 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.117525 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.117555 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.119793 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.120549 1701586 pod_ready.go:92] pod "coredns-7db6d8ff4d-f6s2n" in "kube-system" namespace has status "Ready":"True"
	I0420 01:12:22.120572 1701586 pod_ready.go:81] duration metric: took 6.668772ms for pod "coredns-7db6d8ff4d-f6s2n" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:22.120598 1701586 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h2b7f" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:22.120672 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h2b7f
	I0420 01:12:22.120680 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.120688 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.120692 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.123028 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.123913 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:22.123930 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.123939 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.123943 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.126454 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.127097 1701586 pod_ready.go:92] pod "coredns-7db6d8ff4d-h2b7f" in "kube-system" namespace has status "Ready":"True"
	I0420 01:12:22.127115 1701586 pod_ready.go:81] duration metric: took 6.503476ms for pod "coredns-7db6d8ff4d-h2b7f" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:22.127127 1701586 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:22.127186 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-159256
	I0420 01:12:22.127200 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.127208 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.127213 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.129602 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.130233 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:22.130248 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.130256 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.130259 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.132654 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.133413 1701586 pod_ready.go:92] pod "etcd-ha-159256" in "kube-system" namespace has status "Ready":"True"
	I0420 01:12:22.133431 1701586 pod_ready.go:81] duration metric: took 6.298058ms for pod "etcd-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:22.133456 1701586 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:22.133569 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-159256-m02
	I0420 01:12:22.133578 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.133586 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.133593 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.136050 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.136762 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:12:22.136782 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.136792 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.136796 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.139256 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.140042 1701586 pod_ready.go:92] pod "etcd-ha-159256-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 01:12:22.140063 1701586 pod_ready.go:81] duration metric: took 6.587051ms for pod "etcd-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:22.140087 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:22.298302 1701586 request.go:629] Waited for 158.148534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:22.298396 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:22.298406 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.298415 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.298419 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.301333 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.498427 1701586 request.go:629] Waited for 196.310043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:22.498513 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:22.498575 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.498589 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.498594 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.501222 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.698110 1701586 request.go:629] Waited for 57.176813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:22.698208 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:22.698214 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.698223 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.698226 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.701205 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:22.898402 1701586 request.go:629] Waited for 196.364376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:22.898460 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:22.898466 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:22.898475 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:22.898482 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:22.901829 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:23.141026 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:23.141047 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:23.141055 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:23.141061 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:23.143955 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:23.298040 1701586 request.go:629] Waited for 153.225057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:23.298098 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:23.298105 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:23.298119 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:23.298124 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:23.301323 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:23.640663 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:23.640683 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:23.640693 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:23.640697 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:23.643693 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:23.698693 1701586 request.go:629] Waited for 54.188313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:23.698785 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:23.698798 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:23.698808 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:23.698813 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:23.701735 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:24.140590 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:24.140611 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:24.140621 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:24.140626 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:24.144750 1701586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 01:12:24.145998 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:24.146012 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:24.146021 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:24.146025 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:24.149688 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:24.150262 1701586 pod_ready.go:102] pod "kube-apiserver-ha-159256" in "kube-system" namespace has status "Ready":"False"
	I0420 01:12:24.640568 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:24.640593 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:24.640601 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:24.640606 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:24.643623 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:24.644516 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:24.644535 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:24.644545 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:24.644549 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:24.647422 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:25.140923 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:25.140959 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:25.140968 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:25.140974 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:25.144245 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:25.145034 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:25.145057 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:25.145066 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:25.145070 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:25.147982 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:25.640337 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:25.640412 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:25.640436 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:25.640457 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:25.644811 1701586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 01:12:25.645796 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:25.645816 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:25.645833 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:25.645837 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:25.648457 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:26.140889 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:26.140910 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:26.140920 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:26.140927 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:26.143701 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:26.144551 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:26.144570 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:26.144580 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:26.144584 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:26.146899 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:26.640302 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:26.640326 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:26.640336 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:26.640341 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:26.646416 1701586 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 01:12:26.647311 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:26.647334 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:26.647344 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:26.647350 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:26.649746 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:26.650747 1701586 pod_ready.go:102] pod "kube-apiserver-ha-159256" in "kube-system" namespace has status "Ready":"False"
	I0420 01:12:27.140273 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:27.140292 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:27.140301 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:27.140305 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:27.143175 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:27.143965 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:27.143990 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:27.143999 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:27.144004 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:27.146710 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:27.640611 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:27.640633 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:27.640643 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:27.640648 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:27.648369 1701586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 01:12:27.649306 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:27.649325 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:27.649342 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:27.649347 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:27.653915 1701586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 01:12:28.141197 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:28.141222 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:28.141231 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:28.141235 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:28.144159 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:28.144915 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:28.144932 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:28.144941 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:28.144949 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:28.147408 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:28.641189 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:28.641210 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:28.641220 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:28.641223 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:28.644644 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:28.645581 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:28.645628 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:28.645651 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:28.645673 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:28.648250 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:29.141201 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:29.141226 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:29.141236 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:29.141240 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:29.144856 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:29.146368 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:29.146435 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:29.146460 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:29.146480 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:29.149404 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:29.150324 1701586 pod_ready.go:102] pod "kube-apiserver-ha-159256" in "kube-system" namespace has status "Ready":"False"
	I0420 01:12:29.641072 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:29.641097 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:29.641106 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:29.641111 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:29.644090 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:29.645084 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:29.645105 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:29.645115 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:29.645121 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:29.647825 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:30.140391 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:30.140414 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:30.140425 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:30.140432 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:30.143789 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:30.144803 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:30.144831 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:30.144842 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:30.144849 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:30.147985 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:30.640348 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:30.640371 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:30.640381 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:30.640386 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:30.643229 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:30.644177 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:30.644230 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:30.644246 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:30.644252 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:30.646927 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:31.140411 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:31.140438 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:31.140448 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:31.140453 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:31.143559 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:31.144370 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:31.144388 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:31.144397 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:31.144402 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:31.149715 1701586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 01:12:31.150833 1701586 pod_ready.go:102] pod "kube-apiserver-ha-159256" in "kube-system" namespace has status "Ready":"False"
	I0420 01:12:31.640251 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:31.640275 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:31.640284 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:31.640289 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:31.643264 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:31.644022 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:31.644041 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:31.644050 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:31.644054 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:31.646974 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:32.140229 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:32.140257 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:32.140267 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:32.140272 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:32.143165 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:32.143913 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:32.143932 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:32.143942 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:32.143946 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:32.146760 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:32.641016 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:32.641039 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:32.641059 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:32.641063 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:32.644010 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:32.644838 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:32.644891 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:32.644914 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:32.644924 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:32.647657 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:33.140895 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:33.140928 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.140938 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.140947 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.144082 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:33.144758 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:33.144777 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.144786 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.144793 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.147383 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:33.641080 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256
	I0420 01:12:33.641102 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.641111 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.641116 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.702325 1701586 round_trippers.go:574] Response Status: 200 OK in 61 milliseconds
	I0420 01:12:33.706969 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:33.707032 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.707057 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.707078 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.735044 1701586 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0420 01:12:33.737875 1701586 pod_ready.go:97] node "ha-159256" hosting pod "kube-apiserver-ha-159256" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-159256" has status "Ready":"Unknown"
	I0420 01:12:33.737952 1701586 pod_ready.go:81] duration metric: took 11.597854074s for pod "kube-apiserver-ha-159256" in "kube-system" namespace to be "Ready" ...
	E0420 01:12:33.737979 1701586 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-159256" hosting pod "kube-apiserver-ha-159256" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-159256" has status "Ready":"Unknown"
	I0420 01:12:33.738015 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:33.738123 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256-m02
	I0420 01:12:33.738148 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.738171 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.738193 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.748358 1701586 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0420 01:12:33.750153 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:12:33.750215 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.750239 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.750260 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.755570 1701586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0420 01:12:33.756363 1701586 pod_ready.go:92] pod "kube-apiserver-ha-159256-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 01:12:33.756418 1701586 pod_ready.go:81] duration metric: took 18.375437ms for pod "kube-apiserver-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:33.756445 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:33.756551 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-159256
	I0420 01:12:33.756577 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.756600 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.756622 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.764537 1701586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0420 01:12:33.765641 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:33.765698 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.765722 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.765741 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.777570 1701586 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0420 01:12:33.778378 1701586 pod_ready.go:97] node "ha-159256" hosting pod "kube-controller-manager-ha-159256" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-159256" has status "Ready":"Unknown"
	I0420 01:12:33.778443 1701586 pod_ready.go:81] duration metric: took 21.976367ms for pod "kube-controller-manager-ha-159256" in "kube-system" namespace to be "Ready" ...
	E0420 01:12:33.778468 1701586 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-159256" hosting pod "kube-controller-manager-ha-159256" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-159256" has status "Ready":"Unknown"
	I0420 01:12:33.778505 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:33.778608 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-159256-m02
	I0420 01:12:33.778632 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.778653 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.778674 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.783523 1701586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0420 01:12:33.784391 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:12:33.784449 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.784474 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.784495 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.797776 1701586 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0420 01:12:33.799379 1701586 pod_ready.go:92] pod "kube-controller-manager-ha-159256-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 01:12:33.799454 1701586 pod_ready.go:81] duration metric: took 20.921883ms for pod "kube-controller-manager-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:33.799530 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5f79r" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:33.799644 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5f79r
	I0420 01:12:33.799669 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.799691 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.799710 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.809959 1701586 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0420 01:12:33.810685 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m04
	I0420 01:12:33.810735 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.810757 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.810777 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.826371 1701586 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0420 01:12:33.826998 1701586 pod_ready.go:92] pod "kube-proxy-5f79r" in "kube-system" namespace has status "Ready":"True"
	I0420 01:12:33.827052 1701586 pod_ready.go:81] duration metric: took 27.495224ms for pod "kube-proxy-5f79r" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:33.827080 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6hlpp" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:33.841421 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6hlpp
	I0420 01:12:33.841486 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:33.841508 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:33.841565 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:33.844412 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:34.041421 1701586 request.go:629] Waited for 196.344922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:34.041506 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:34.041519 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:34.041575 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:34.041589 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:34.044515 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:34.045097 1701586 pod_ready.go:97] node "ha-159256" hosting pod "kube-proxy-6hlpp" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-159256" has status "Ready":"Unknown"
	I0420 01:12:34.045125 1701586 pod_ready.go:81] duration metric: took 218.024897ms for pod "kube-proxy-6hlpp" in "kube-system" namespace to be "Ready" ...
	E0420 01:12:34.045136 1701586 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-159256" hosting pod "kube-proxy-6hlpp" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-159256" has status "Ready":"Unknown"
	I0420 01:12:34.045162 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f26nw" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:34.241613 1701586 request.go:629] Waited for 196.364778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f26nw
	I0420 01:12:34.241673 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f26nw
	I0420 01:12:34.241683 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:34.241692 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:34.241697 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:34.244372 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:34.441362 1701586 request.go:629] Waited for 196.318396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:12:34.441459 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:12:34.441492 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:34.441509 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:34.441515 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:34.444407 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:34.445343 1701586 pod_ready.go:92] pod "kube-proxy-f26nw" in "kube-system" namespace has status "Ready":"True"
	I0420 01:12:34.445366 1701586 pod_ready.go:81] duration metric: took 400.191068ms for pod "kube-proxy-f26nw" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:34.445379 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-159256" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:34.641388 1701586 request.go:629] Waited for 195.92649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256
	I0420 01:12:34.641471 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256
	I0420 01:12:34.641483 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:34.641492 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:34.641496 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:34.644389 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:34.841300 1701586 request.go:629] Waited for 196.292985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:34.841402 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256
	I0420 01:12:34.841450 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:34.841466 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:34.841472 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:34.844216 1701586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0420 01:12:34.844867 1701586 pod_ready.go:97] node "ha-159256" hosting pod "kube-scheduler-ha-159256" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-159256" has status "Ready":"Unknown"
	I0420 01:12:34.844892 1701586 pod_ready.go:81] duration metric: took 399.505632ms for pod "kube-scheduler-ha-159256" in "kube-system" namespace to be "Ready" ...
	E0420 01:12:34.844929 1701586 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-159256" hosting pod "kube-scheduler-ha-159256" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-159256" has status "Ready":"Unknown"
	I0420 01:12:34.844944 1701586 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:35.041368 1701586 request.go:629] Waited for 196.313818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256-m02
	I0420 01:12:35.041480 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-159256-m02
	I0420 01:12:35.041494 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:35.041518 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:35.041578 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:35.044629 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:35.241402 1701586 request.go:629] Waited for 196.120166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:12:35.241469 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-159256-m02
	I0420 01:12:35.241524 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:35.241557 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:35.241562 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:35.248039 1701586 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0420 01:12:35.248943 1701586 pod_ready.go:92] pod "kube-scheduler-ha-159256-m02" in "kube-system" namespace has status "Ready":"True"
	I0420 01:12:35.248962 1701586 pod_ready.go:81] duration metric: took 404.010019ms for pod "kube-scheduler-ha-159256-m02" in "kube-system" namespace to be "Ready" ...
	I0420 01:12:35.248975 1701586 pod_ready.go:38] duration metric: took 13.147552357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
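The 13.1s of "extra waiting" above is the standard client-go get-and-check loop: fetch the pod, fetch its hosting node, and skip the pod (the "skipping!" lines) when the node itself is not Ready. A minimal sketch of that pattern, not minikube's actual pod_ready code; the kubeconfig path, namespace, and pod name are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig at the default location; minikube builds its
	// REST config differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, the same cadence visible in the timestamps above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-159256", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
					// Mirror the "skipping!" branch: stop waiting on this pod.
					return false, fmt.Errorf("node %s has status Ready:%s", node.Name, c.Status)
				}
			}
			return podReady(pod), nil
		})
	fmt.Println("wait result:", err)
}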
	I0420 01:12:35.248988 1701586 system_svc.go:44] waiting for kubelet service to be running ....
	I0420 01:12:35.249053 1701586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:12:35.261893 1701586 system_svc.go:56] duration metric: took 12.897458ms WaitForService to wait for kubelet
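The WaitForService step above reduces to an exit-code check: `systemctl is-active --quiet <unit>` prints nothing and exits 0 when the unit is active. A local stand-in for the same check, run directly rather than over minikube's ssh_runner with sudo:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses stdout; the exit code alone carries the answer.
	// The unit name is taken from the log line above.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}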
	I0420 01:12:35.261921 1701586 kubeadm.go:576] duration metric: took 20.300544137s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0420 01:12:35.261944 1701586 node_conditions.go:102] verifying NodePressure condition ...
	I0420 01:12:35.441339 1701586 request.go:629] Waited for 179.318131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0420 01:12:35.441399 1701586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0420 01:12:35.441405 1701586 round_trippers.go:469] Request Headers:
	I0420 01:12:35.441414 1701586 round_trippers.go:473]     Accept: application/json, */*
	I0420 01:12:35.441419 1701586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0420 01:12:35.444655 1701586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0420 01:12:35.446580 1701586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0420 01:12:35.446612 1701586 node_conditions.go:123] node cpu capacity is 2
	I0420 01:12:35.446624 1701586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0420 01:12:35.446630 1701586 node_conditions.go:123] node cpu capacity is 2
	I0420 01:12:35.446634 1701586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0420 01:12:35.446639 1701586 node_conditions.go:123] node cpu capacity is 2
	I0420 01:12:35.446644 1701586 node_conditions.go:105] duration metric: took 184.694829ms to run NodePressure ...
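The NodePressure step reads each node's capacity out of the same /api/v1/nodes response logged above. A minimal sketch of listing nodes and printing the two capacities it reports, with the same kubeconfig assumption as the earlier sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Quantity.String has a pointer receiver, so copy out of the map first.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}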
	I0420 01:12:35.446657 1701586 start.go:240] waiting for startup goroutines ...
	I0420 01:12:35.446679 1701586 start.go:254] writing updated cluster config ...
	I0420 01:12:35.447011 1701586 ssh_runner.go:195] Run: rm -f paused
	I0420 01:12:35.520214 1701586 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0420 01:12:35.524817 1701586 out.go:177] * Done! kubectl is now configured to use "ha-159256" cluster and "default" namespace by default
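The recurring "Waited for ...ms due to client-side throttling" lines throughout the log come from client-go's own token-bucket limiter, not the API server: with the default QPS of 5, one request costs 200ms once the burst allowance is spent, which is consistent with the ~196ms waits above. A sketch of where that limiter is configured; the raised values are arbitrary:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Zero values mean the client-go defaults of QPS=5 and Burst=10; raising
	// them trades fewer request.go waits for more load on the apiserver.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}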
	
	
	==> CRI-O <==
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.831658299Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8449b24d3ed2611467738c6ad443cde95fdd281c8f8f982ce441131a47eedbbe/merged/etc/group: no such file or directory"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.873720693Z" level=info msg="Created container 4e4eb172fc392fbc01bd8d599c4d6f09842db3d2a51a36603d9698e9cf2e1795: kube-system/storage-provisioner/storage-provisioner" id=16af1824-6f8b-4757-9927-8c1775f211a4 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.874348613Z" level=info msg="Starting container: 4e4eb172fc392fbc01bd8d599c4d6f09842db3d2a51a36603d9698e9cf2e1795" id=02f98bf4-4688-47e6-b206-1e427cd8f5a5 name=/runtime.v1.RuntimeService/StartContainer
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.882215037Z" level=info msg="Started container" PID=1836 containerID=4e4eb172fc392fbc01bd8d599c4d6f09842db3d2a51a36603d9698e9cf2e1795 description=kube-system/storage-provisioner/storage-provisioner id=02f98bf4-4688-47e6-b206-1e427cd8f5a5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5d12f3e172477f25ba1c607b448dc5391c93c1298d74827dd17d838a9d366611
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.945281321Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.949398971Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.949430478Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.949445197Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.953289616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.953318357Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.953521576Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.956821436Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.956854132Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.956869451Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.960049560Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 20 01:12:07 ha-159256 crio[643]: time="2024-04-20 01:12:07.960081879Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 20 01:12:17 ha-159256 crio[643]: time="2024-04-20 01:12:17.560540291Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.30.0" id=af641cc9-764c-431d-a45a-d892278a5599 name=/runtime.v1.ImageService/ImageStatus
	Apr 20 01:12:17 ha-159256 crio[643]: time="2024-04-20 01:12:17.560752913Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f],Size_:108229958,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=af641cc9-764c-431d-a45a-d892278a5599 name=/runtime.v1.ImageService/ImageStatus
	Apr 20 01:12:17 ha-159256 crio[643]: time="2024-04-20 01:12:17.561983048Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.30.0" id=4c2a4030-feb2-4a35-b1dd-b89ee03d2510 name=/runtime.v1.ImageService/ImageStatus
	Apr 20 01:12:17 ha-159256 crio[643]: time="2024-04-20 01:12:17.562184240Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f],Size_:108229958,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=4c2a4030-feb2-4a35-b1dd-b89ee03d2510 name=/runtime.v1.ImageService/ImageStatus
	Apr 20 01:12:17 ha-159256 crio[643]: time="2024-04-20 01:12:17.563300589Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-159256/kube-controller-manager" id=1d3b9a79-4e14-4b8b-91f4-7a59eb652255 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 20 01:12:17 ha-159256 crio[643]: time="2024-04-20 01:12:17.563395503Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 20 01:12:17 ha-159256 crio[643]: time="2024-04-20 01:12:17.638427072Z" level=info msg="Created container 83f05ce1a100aee7a105457bd4c1d3515ae5bbb51532c61b082b362e118597a9: kube-system/kube-controller-manager-ha-159256/kube-controller-manager" id=1d3b9a79-4e14-4b8b-91f4-7a59eb652255 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 20 01:12:17 ha-159256 crio[643]: time="2024-04-20 01:12:17.639532657Z" level=info msg="Starting container: 83f05ce1a100aee7a105457bd4c1d3515ae5bbb51532c61b082b362e118597a9" id=7cc4f68a-4a6a-4f1a-baf7-21e382bdb88d name=/runtime.v1.RuntimeService/StartContainer
	Apr 20 01:12:17 ha-159256 crio[643]: time="2024-04-20 01:12:17.649114600Z" level=info msg="Started container" PID=1917 containerID=83f05ce1a100aee7a105457bd4c1d3515ae5bbb51532c61b082b362e118597a9 description=kube-system/kube-controller-manager-ha-159256/kube-controller-manager id=7cc4f68a-4a6a-4f1a-baf7-21e382bdb88d name=/runtime.v1.RuntimeService/StartContainer sandboxID=665eeb36dd6a1a0f147a7ef515e45b3366cec71e6e8ee15b1428baca7e8ea3db
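The CreateContainer/StartContainer pairs above are CRI RuntimeService RPCs arriving over CRI-O's unix socket (the same socket recorded in the node's cri-socket annotation below). A minimal sketch of a client for that service, using the read-only ListContainers call instead of container creation; it assumes the k8s.io/cri-api and google.golang.org/grpc modules are available:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		id := c.Id
		if len(id) > 13 {
			id = id[:13] // the truncated form used in the container status table below
		}
		fmt.Printf("%s  %-9s  %s\n", id, c.State, c.Metadata.Name)
	}
}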
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	83f05ce1a100a       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1   20 seconds ago       Running             kube-controller-manager   8                   665eeb36dd6a1       kube-controller-manager-ha-159256
	4e4eb172fc392       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   30 seconds ago       Running             storage-provisioner       4                   5d12f3e172477       storage-provisioner
	44374a9444e51       adf781c1312f06f9d22bfc391f48c68e39ed1bfe4166c6ec09faea1a89f23d46   37 seconds ago       Running             kube-vip                  3                   a393b321d3350       kube-vip-ha-159256
	197787ed62a31       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb   41 seconds ago       Running             kube-apiserver            4                   79f5b0aa43e1e       kube-apiserver-ha-159256
	2d0c093ae57eb       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   57 seconds ago       Running             coredns                   2                   0fbd6a5cfcc4f       coredns-7db6d8ff4d-h2b7f
	9bce280a1f3bb       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1   57 seconds ago       Exited              kube-controller-manager   7                   665eeb36dd6a1       kube-controller-manager-ha-159256
	7b4c90d2ceb04       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   58 seconds ago       Running             coredns                   2                   0fead7b2b23a4       coredns-7db6d8ff4d-f6s2n
	3c91d996dc730       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   58 seconds ago       Running             busybox                   2                   8757b87f0fa50       busybox-fc5497c4f-z9cvl
	4803fc113d7af       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   About a minute ago   Running             kindnet-cni               2                   de084fd8d0fb4       kindnet-nfg5r
	6d4ace814ef4e       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f   About a minute ago   Running             kube-proxy                2                   5375d12b32931       kube-proxy-6hlpp
	cc9d580e4437a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   5d12f3e172477       storage-provisioner
	a78728ecb5872       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a   About a minute ago   Running             kube-scheduler            2                   bab316dfa2445       kube-scheduler-ha-159256
	711e075432c6e       adf781c1312f06f9d22bfc391f48c68e39ed1bfe4166c6ec09faea1a89f23d46   About a minute ago   Exited              kube-vip                  2                   a393b321d3350       kube-vip-ha-159256
	de32529a9486b       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb   About a minute ago   Exited              kube-apiserver            3                   79f5b0aa43e1e       kube-apiserver-ha-159256
	11d12fc2305a6       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd   About a minute ago   Running             etcd                      2                   add289b23b0d4       etcd-ha-159256
	
	
	==> coredns [2d0c093ae57eb7a43c64b41a04cc9b52def001713185663e84906919fcb67903] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58480 - 8457 "HINFO IN 5254737632468002906.4122808441419496486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024687182s
	
	
	==> coredns [7b4c90d2ceb04582b152e267e2ec09656b11faa334bbf66a14ee8c995e5d921a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40175 - 661 "HINFO IN 7645376861926448522.2803838394715529557. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012427137s
	
	
	==> describe nodes <==
	Name:               ha-159256
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-159256
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-159256
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_20T01_01_53_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:01:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-159256
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:11:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 20 Apr 2024 01:11:30 +0000   Sat, 20 Apr 2024 01:12:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 20 Apr 2024 01:11:30 +0000   Sat, 20 Apr 2024 01:12:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 20 Apr 2024 01:11:30 +0000   Sat, 20 Apr 2024 01:12:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 20 Apr 2024 01:11:30 +0000   Sat, 20 Apr 2024 01:12:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-159256
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b81fac3a3eb4553971cfab05c4a25dc
	  System UUID:                8372531b-cc95-4944-a53b-b12d1c87e1b1
	  Boot ID:                    cdaae8f5-66dd-4dda-afdc-9b84bbb262c1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-z9cvl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 coredns-7db6d8ff4d-f6s2n             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7db6d8ff4d-h2b7f             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-159256                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-nfg5r                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-159256             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-159256    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-6hlpp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-159256             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-159256                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 60s                    kube-proxy       
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-159256 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-159256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-159256 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node ha-159256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node ha-159256 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node ha-159256 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                    node-controller  Node ha-159256 event: Registered Node ha-159256 in Controller
	  Normal  NodeReady                10m                    kubelet          Node ha-159256 status is now: NodeReady
	  Normal  RegisteredNode           9m48s                  node-controller  Node ha-159256 event: Registered Node ha-159256 in Controller
	  Normal  RegisteredNode           8m50s                  node-controller  Node ha-159256 event: Registered Node ha-159256 in Controller
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-159256 event: Registered Node ha-159256 in Controller
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-159256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m24s (x8 over 5m24s)  kubelet          Node ha-159256 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-159256 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m34s                  node-controller  Node ha-159256 event: Registered Node ha-159256 in Controller
	  Normal  RegisteredNode           3m42s                  node-controller  Node ha-159256 event: Registered Node ha-159256 in Controller
	  Normal  RegisteredNode           3m20s                  node-controller  Node ha-159256 event: Registered Node ha-159256 in Controller
	  Normal  Starting                 115s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)    kubelet          Node ha-159256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)    kubelet          Node ha-159256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)    kubelet          Node ha-159256 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           65s                    node-controller  Node ha-159256 event: Registered Node ha-159256 in Controller
	  Normal  RegisteredNode           8s                     node-controller  Node ha-159256 event: Registered Node ha-159256 in Controller
	  Normal  NodeNotReady             5s                     node-controller  Node ha-159256 status is now: NodeNotReady
	
	
	Name:               ha-159256-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-159256-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-159256
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T01_02_34_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:02:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-159256-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:12:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:11:24 +0000   Sat, 20 Apr 2024 01:02:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:11:24 +0000   Sat, 20 Apr 2024 01:02:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:11:24 +0000   Sat, 20 Apr 2024 01:02:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:11:24 +0000   Sat, 20 Apr 2024 01:03:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-159256-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 7323f9ff8d664ddcb11279dbd6a0b2f7
	  System UUID:                b23d5989-80b1-4744-a676-ffa306e83383
	  Boot ID:                    cdaae8f5-66dd-4dda-afdc-9b84bbb262c1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-57n5m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 etcd-ha-159256-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-zcfdw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-159256-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-159256-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-f26nw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-159256-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-159256-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m19s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 72s                    kube-proxy       
	  Normal  Starting                 4m42s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-159256-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-159256-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-159256-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                    node-controller  Node ha-159256-m02 event: Registered Node ha-159256-m02 in Controller
	  Normal  RegisteredNode           9m48s                  node-controller  Node ha-159256-m02 event: Registered Node ha-159256-m02 in Controller
	  Normal  RegisteredNode           8m50s                  node-controller  Node ha-159256-m02 event: Registered Node ha-159256-m02 in Controller
	  Normal  Starting                 6m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m50s (x8 over 6m51s)  kubelet          Node ha-159256-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    6m50s (x8 over 6m51s)  kubelet          Node ha-159256-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m50s (x8 over 6m51s)  kubelet          Node ha-159256-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-159256-m02 event: Registered Node ha-159256-m02 in Controller
	  Normal  NodeHasSufficientPID     5m22s (x8 over 5m22s)  kubelet          Node ha-159256-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-159256-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-159256-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           4m34s                  node-controller  Node ha-159256-m02 event: Registered Node ha-159256-m02 in Controller
	  Normal  RegisteredNode           3m42s                  node-controller  Node ha-159256-m02 event: Registered Node ha-159256-m02 in Controller
	  Normal  RegisteredNode           3m20s                  node-controller  Node ha-159256-m02 event: Registered Node ha-159256-m02 in Controller
	  Normal  Starting                 113s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)    kubelet          Node ha-159256-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)    kubelet          Node ha-159256-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)    kubelet          Node ha-159256-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           65s                    node-controller  Node ha-159256-m02 event: Registered Node ha-159256-m02 in Controller
	  Normal  RegisteredNode           8s                     node-controller  Node ha-159256-m02 event: Registered Node ha-159256-m02 in Controller
	
	
	Name:               ha-159256-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-159256-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=910ae0f62f2dcf448782075db183a042c84a625e
	                    minikube.k8s.io/name=ha-159256
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_20T01_04_34_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 Apr 2024 01:04:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-159256-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 Apr 2024 01:12:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 Apr 2024 01:12:22 +0000   Sat, 20 Apr 2024 01:12:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 Apr 2024 01:12:22 +0000   Sat, 20 Apr 2024 01:12:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 Apr 2024 01:12:22 +0000   Sat, 20 Apr 2024 01:12:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 Apr 2024 01:12:22 +0000   Sat, 20 Apr 2024 01:12:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-159256-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 19c5af7e537b46008e9bb80aa36d9a98
	  System UUID:                3861879c-decd-4dd8-a1a1-661ff3fdb412
	  Boot ID:                    cdaae8f5-66dd-4dda-afdc-9b84bbb262c1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pbcjt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kindnet-x7psn              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m5s
	  kube-system                 kube-proxy-5f79r           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 8m3s                  kube-proxy       
	  Normal  Starting                 9s                    kube-proxy       
	  Normal  Starting                 2m54s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    8m5s                  kubelet          Node ha-159256-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m5s                  kubelet          Node ha-159256-m04 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m5s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     8m5s                  kubelet          Node ha-159256-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m3s                  node-controller  Node ha-159256-m04 event: Registered Node ha-159256-m04 in Controller
	  Normal  RegisteredNode           8m3s                  node-controller  Node ha-159256-m04 event: Registered Node ha-159256-m04 in Controller
	  Normal  RegisteredNode           8m                    node-controller  Node ha-159256-m04 event: Registered Node ha-159256-m04 in Controller
	  Normal  NodeReady                7m33s                 kubelet          Node ha-159256-m04 status is now: NodeReady
	  Normal  RegisteredNode           6m3s                  node-controller  Node ha-159256-m04 event: Registered Node ha-159256-m04 in Controller
	  Normal  RegisteredNode           4m34s                 node-controller  Node ha-159256-m04 event: Registered Node ha-159256-m04 in Controller
	  Normal  NodeNotReady             3m54s                 node-controller  Node ha-159256-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           3m42s                 node-controller  Node ha-159256-m04 event: Registered Node ha-159256-m04 in Controller
	  Normal  RegisteredNode           3m20s                 node-controller  Node ha-159256-m04 event: Registered Node ha-159256-m04 in Controller
	  Normal  Starting                 3m16s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m3s (x8 over 3m16s)  kubelet          Node ha-159256-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x8 over 3m16s)  kubelet          Node ha-159256-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x8 over 3m16s)  kubelet          Node ha-159256-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           65s                   node-controller  Node ha-159256-m04 event: Registered Node ha-159256-m04 in Controller
	  Normal  Starting                 29s                   kubelet          Starting kubelet.
	  Normal  NodeNotReady             25s                   node-controller  Node ha-159256-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  17s (x8 over 29s)     kubelet          Node ha-159256-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x8 over 29s)     kubelet          Node ha-159256-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x8 over 29s)     kubelet          Node ha-159256-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                    node-controller  Node ha-159256-m04 event: Registered Node ha-159256-m04 in Controller
	
	
	==> dmesg <==
	[  +0.001061] FS-Cache: O-key=[8] '03dac90000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000065 [p=0000005c fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000029c0a1a
	[  +0.001059] FS-Cache: N-key=[8] '03dac90000000000'
	[  +0.005336] FS-Cache: Duplicate cookie detected
	[  +0.000779] FS-Cache: O-cookie c=0000005f [p=0000005c fl=226 nc=0 na=1]
	[  +0.001023] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=000000007a57d0bb
	[  +0.001258] FS-Cache: O-key=[8] '03dac90000000000'
	[  +0.000764] FS-Cache: N-cookie c=00000066 [p=0000005c fl=2 nc=0 na=1]
	[  +0.000951] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000f46698ff
	[  +0.001060] FS-Cache: N-key=[8] '03dac90000000000'
	[  +2.313676] FS-Cache: Duplicate cookie detected
	[  +0.000737] FS-Cache: O-cookie c=0000005d [p=0000005c fl=226 nc=0 na=1]
	[  +0.000948] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=00000000b6025997
	[  +0.001081] FS-Cache: O-key=[8] '02dac90000000000'
	[  +0.000787] FS-Cache: N-cookie c=00000068 [p=0000005c fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000f36d27e2
	[  +0.001030] FS-Cache: N-key=[8] '02dac90000000000'
	[  +0.347775] FS-Cache: Duplicate cookie detected
	[  +0.000769] FS-Cache: O-cookie c=00000062 [p=0000005c fl=226 nc=0 na=1]
	[  +0.000925] FS-Cache: O-cookie d=00000000ead4e9ad{9p.inode} n=00000000be586629
	[  +0.001194] FS-Cache: O-key=[8] '08dac90000000000'
	[  +0.000709] FS-Cache: N-cookie c=00000069 [p=0000005c fl=2 nc=0 na=1]
	[  +0.000915] FS-Cache: N-cookie d=00000000ead4e9ad{9p.inode} n=00000000acbd1b23
	[  +0.001038] FS-Cache: N-key=[8] '08dac90000000000'
	
	
	==> etcd [11d12fc2305a697a4f4b8664be5757470a2dd2e67953e0d2ad9e09193b66e768] <==
	{"level":"warn","ts":"2024-04-20T01:11:18.832819Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:08.798325Z","time spent":"10.034488123s","remote":"127.0.0.1:37122","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-20T01:11:18.832837Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"10.034529844s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-04-20T01:11:18.832848Z","caller":"traceutil/trace.go:171","msg":"trace[1982578402] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"10.034541101s","start":"2024-04-20T01:11:08.798303Z","end":"2024-04-20T01:11:18.832844Z","steps":["trace[1982578402] 'agreement among raft nodes before linearized reading'  (duration: 10.03452991s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:11:18.83286Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:08.798276Z","time spent":"10.034579977s","remote":"127.0.0.1:37122","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:500 "}
	{"level":"info","ts":"2024-04-20T01:11:18.830469Z","caller":"traceutil/trace.go:171","msg":"trace[1563279505] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"10.033117364s","start":"2024-04-20T01:11:08.797348Z","end":"2024-04-20T01:11:18.830466Z","steps":["trace[1563279505] 'agreement among raft nodes before linearized reading'  (duration: 10.033104335s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:11:18.832967Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:08.797338Z","time spent":"10.035621382s","remote":"127.0.0.1:42540","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-20T01:11:18.83057Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"10.033307455s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-04-20T01:11:18.833109Z","caller":"traceutil/trace.go:171","msg":"trace[1026828874] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; }","duration":"10.035842225s","start":"2024-04-20T01:11:08.79725Z","end":"2024-04-20T01:11:18.833092Z","steps":["trace[1026828874] 'agreement among raft nodes before linearized reading'  (duration: 10.033316127s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:11:18.833137Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:08.797238Z","time spent":"10.035886934s","remote":"127.0.0.1:37334","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-20T01:11:18.830716Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"10.043033288s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-04-20T01:11:18.833216Z","caller":"traceutil/trace.go:171","msg":"trace[2024036941] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; }","duration":"10.045532112s","start":"2024-04-20T01:11:08.787678Z","end":"2024-04-20T01:11:18.83321Z","steps":["trace[2024036941] 'agreement among raft nodes before linearized reading'  (duration: 10.04303364s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:11:18.833237Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:08.787637Z","time spent":"10.045591548s","remote":"127.0.0.1:37162","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-20T01:11:18.830733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"10.051334086s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-04-20T01:11:18.833259Z","caller":"traceutil/trace.go:171","msg":"trace[811848305] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; }","duration":"10.053862013s","start":"2024-04-20T01:11:08.779392Z","end":"2024-04-20T01:11:18.833254Z","steps":["trace[811848305] 'agreement among raft nodes before linearized reading'  (duration: 10.051333782s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:11:18.833274Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:08.779295Z","time spent":"10.053973017s","remote":"127.0.0.1:42378","response type":"/etcdserverpb.KV/Range","request count":0,"request size":121,"response count":0,"response size":0,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-20T01:11:18.831029Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:08.77907Z","time spent":"10.051955188s","remote":"127.0.0.1:37054","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":0,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-20T01:11:18.830583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"10.033356389s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-04-20T01:11:18.835947Z","caller":"traceutil/trace.go:171","msg":"trace[1211527001] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; }","duration":"10.038711353s","start":"2024-04-20T01:11:08.797222Z","end":"2024-04-20T01:11:18.835934Z","steps":["trace[1211527001] 'agreement among raft nodes before linearized reading'  (duration: 10.033356282s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:11:18.836005Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:08.797169Z","time spent":"10.038811863s","remote":"127.0.0.1:42428","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-20T01:11:18.83237Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:08.791506Z","time spent":"10.040849055s","remote":"127.0.0.1:37072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 "}
	{"level":"info","ts":"2024-04-20T01:11:18.84455Z","caller":"etcdserver/v3_server.go:889","msg":"first commit in current term: resending ReadIndex request"}
	{"level":"warn","ts":"2024-04-20T01:11:18.852733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.04574096s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-159256-m02\" ","response":"range_response_count:1 size:6101"}
	{"level":"info","ts":"2024-04-20T01:11:18.857602Z","caller":"traceutil/trace.go:171","msg":"trace[1073249684] range","detail":"{range_begin:/registry/minions/ha-159256-m02; range_end:; response_count:1; response_revision:2544; }","duration":"4.050618136s","start":"2024-04-20T01:11:14.806967Z","end":"2024-04-20T01:11:18.857585Z","steps":["trace[1073249684] 'agreement among raft nodes before linearized reading'  (duration: 4.045601189s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-20T01:11:18.858683Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-20T01:11:14.806933Z","time spent":"4.05172624s","remote":"127.0.0.1:42512","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":6125,"request content":"key:\"/registry/minions/ha-159256-m02\" "}
	
	
	==> kernel <==
	 01:12:38 up  7:55,  0 users,  load average: 1.89, 2.43, 2.04
	Linux ha-159256 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4803fc113d7af109005e6fcfdd8ce0fd01cdf3130958fc7f709141884b5cb3e2] <==
	I0420 01:12:07.945008       1 main.go:227] handling current node
	I0420 01:12:07.948367       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0420 01:12:07.948398       1 main.go:250] Node ha-159256-m02 has CIDR [10.244.1.0/24] 
	I0420 01:12:07.948622       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0420 01:12:07.948686       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0420 01:12:07.948694       1 main.go:250] Node ha-159256-m04 has CIDR [10.244.3.0/24] 
	I0420 01:12:07.948729       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0420 01:12:17.956524       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 01:12:17.956630       1 main.go:227] handling current node
	I0420 01:12:17.956667       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0420 01:12:17.956702       1 main.go:250] Node ha-159256-m02 has CIDR [10.244.1.0/24] 
	I0420 01:12:17.956826       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0420 01:12:17.956862       1 main.go:250] Node ha-159256-m04 has CIDR [10.244.3.0/24] 
	I0420 01:12:27.974417       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 01:12:27.974538       1 main.go:227] handling current node
	I0420 01:12:27.974575       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0420 01:12:27.974621       1 main.go:250] Node ha-159256-m02 has CIDR [10.244.1.0/24] 
	I0420 01:12:27.974781       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0420 01:12:27.974820       1 main.go:250] Node ha-159256-m04 has CIDR [10.244.3.0/24] 
	I0420 01:12:37.988821       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0420 01:12:37.988971       1 main.go:227] handling current node
	I0420 01:12:37.989008       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0420 01:12:37.989039       1 main.go:250] Node ha-159256-m02 has CIDR [10.244.1.0/24] 
	I0420 01:12:37.989165       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0420 01:12:37.989223       1 main.go:250] Node ha-159256-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [197787ed62a318156b0a9509403ff3f0e41c36aca3ec185256f796d815ba99b6] <==
	I0420 01:11:59.233469       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0420 01:11:59.267427       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0420 01:11:59.267559       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0420 01:11:59.268298       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0420 01:11:59.268316       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0420 01:11:59.617933       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 01:11:59.618047       1 policy_source.go:224] refreshing policies
	I0420 01:11:59.617944       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0420 01:11:59.620200       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 01:11:59.620225       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 01:11:59.620918       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 01:11:59.620952       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 01:11:59.621127       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0420 01:11:59.621657       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 01:11:59.627487       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0420 01:11:59.668757       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 01:11:59.668806       1 aggregator.go:165] initial CRD sync complete...
	I0420 01:11:59.668814       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 01:11:59.668820       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 01:11:59.668832       1 cache.go:39] Caches are synced for autoregister controller
	I0420 01:11:59.704154       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 01:12:00.265553       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0420 01:12:00.956798       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0420 01:12:00.958787       1 controller.go:615] quota admission added evaluator for: endpoints
	I0420 01:12:00.976291       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [de32529a9486b6d387c442cca7504c127ddaefc543c23d40df44ed75924d6d9f] <==
	Trace[821090921]: ---"About to write a response" 4055ms (01:11:18.862)
	Trace[821090921]: [4.056933261s] [4.056933261s] END
	I0420 01:11:19.526203       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0420 01:11:20.903485       1 shared_informer.go:320] Caches are synced for configmaps
	I0420 01:11:20.994468       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0420 01:11:20.994502       1 policy_source.go:224] refreshing policies
	I0420 01:11:21.003187       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0420 01:11:21.505244       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0420 01:11:21.505271       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0420 01:11:21.706712       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0420 01:11:21.712153       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0420 01:11:21.803237       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0420 01:11:21.803308       1 aggregator.go:165] initial CRD sync complete...
	I0420 01:11:21.803316       1 autoregister_controller.go:141] Starting autoregister controller
	I0420 01:11:21.803323       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0420 01:11:21.803328       1 cache.go:39] Caches are synced for autoregister controller
	I0420 01:11:21.897238       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0420 01:11:21.902153       1 trace.go:236] Trace[691163232]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:09bc5f02-3245-4d1a-8e58-f4577a61a5a8,client:::1,api-group:coordination.k8s.io,api-version:v1,name:apiserver-cpazt52tf2jw575fqpakrpxbgy,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-cpazt52tf2jw575fqpakrpxbgy,user-agent:kube-apiserver/v1.30.0 (linux/arm64) kubernetes/7c48c2b,verb:PUT (20-Apr-2024 01:11:19.267) (total time: 2634ms):
	Trace[691163232]: ["GuaranteedUpdate etcd3" audit-id:09bc5f02-3245-4d1a-8e58-f4577a61a5a8,key:/leases/kube-system/apiserver-cpazt52tf2jw575fqpakrpxbgy,type:*coordination.Lease,resource:leases.coordination.k8s.io 2634ms (01:11:19.267)
	Trace[691163232]:  ---"About to Encode" 2625ms (01:11:21.897)]
	Trace[691163232]: [2.634515635s] [2.634515635s] END
	I0420 01:11:21.903092       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0420 01:11:21.911703       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0420 01:11:21.989344       1 shared_informer.go:320] Caches are synced for node_authorizer
	F0420 01:11:56.502871       1 hooks.go:203] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [83f05ce1a100aee7a105457bd4c1d3515ae5bbb51532c61b082b362e118597a9] <==
	I0420 01:12:30.515791       1 shared_informer.go:320] Caches are synced for job
	I0420 01:12:30.518072       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0420 01:12:30.555203       1 shared_informer.go:320] Caches are synced for daemon sets
	I0420 01:12:30.589072       1 shared_informer.go:320] Caches are synced for deployment
	I0420 01:12:30.592354       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0420 01:12:30.592491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.761µs"
	I0420 01:12:30.592613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.368µs"
	I0420 01:12:30.597759       1 shared_informer.go:320] Caches are synced for disruption
	I0420 01:12:30.669388       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0420 01:12:30.684543       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 01:12:30.684600       1 shared_informer.go:320] Caches are synced for resource quota
	I0420 01:12:31.113113       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 01:12:31.113168       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0420 01:12:31.144146       1 shared_informer.go:320] Caches are synced for garbage collector
	I0420 01:12:33.465935       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-159256-m04"
	I0420 01:12:33.627421       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mcmjz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mcmjz\": the object has been modified; please apply your changes to the latest version and try again"
	I0420 01:12:33.631573       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bacb70e4-4a2a-4e99-91b4-7e270ff0769f", APIVersion:"v1", ResourceVersion:"283", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mcmjz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mcmjz": the object has been modified; please apply your changes to the latest version and try again
	I0420 01:12:33.679914       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mcmjz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mcmjz\": the object has been modified; please apply your changes to the latest version and try again"
	I0420 01:12:33.680150       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bacb70e4-4a2a-4e99-91b4-7e270ff0769f", APIVersion:"v1", ResourceVersion:"283", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mcmjz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mcmjz": the object has been modified; please apply your changes to the latest version and try again
	I0420 01:12:33.698847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="164.099386ms"
	E0420 01:12:33.698895       1 replica_set.go:557] sync "kube-system/coredns-7db6d8ff4d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-7db6d8ff4d": the object has been modified; please apply your changes to the latest version and try again
	I0420 01:12:33.699035       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.988µs"
	I0420 01:12:33.705880       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="107.098µs"
	I0420 01:12:33.808988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.459341ms"
	I0420 01:12:33.809156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.412µs"
	
	
	==> kube-controller-manager [9bce280a1f3bb32ef1f3151a5e0423f95408b600fbfe9377fc2e263dd820e84a] <==
	I0420 01:11:41.760562       1 serving.go:380] Generated self-signed cert in-memory
	I0420 01:11:42.613256       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0420 01:11:42.613284       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:42.614843       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0420 01:11:42.615013       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0420 01:11:42.615118       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0420 01:11:42.615241       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0420 01:11:52.632563       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [6d4ace814ef4ed03505d34f54cc97c09207ef2261da3512c795838607bda0ab8] <==
	I0420 01:11:37.837284       1 server_linux.go:69] "Using iptables proxy"
	I0420 01:11:37.852168       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0420 01:11:37.989564       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0420 01:11:37.989733       1 server_linux.go:165] "Using iptables Proxier"
	I0420 01:11:37.992763       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0420 01:11:37.992885       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0420 01:11:37.992933       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0420 01:11:37.993196       1 server.go:872] "Version info" version="v1.30.0"
	I0420 01:11:37.993403       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0420 01:11:37.994657       1 config.go:192] "Starting service config controller"
	I0420 01:11:37.994723       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0420 01:11:37.994774       1 config.go:101] "Starting endpoint slice config controller"
	I0420 01:11:37.994802       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0420 01:11:37.995401       1 config.go:319] "Starting node config controller"
	I0420 01:11:37.995472       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0420 01:11:38.095635       1 shared_informer.go:320] Caches are synced for node config
	I0420 01:11:38.095672       1 shared_informer.go:320] Caches are synced for service config
	I0420 01:11:38.095707       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a78728ecb587271b47816aa9724dde1672e6dccb1d7fe737b658a8f347282470] <==
	E0420 01:11:18.342742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0420 01:11:18.371222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0420 01:11:18.371260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0420 01:11:18.736488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0420 01:11:18.736525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0420 01:11:18.817300       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0420 01:11:18.817341       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0420 01:11:19.381236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0420 01:11:19.381401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0420 01:11:39.388676       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0420 01:11:59.456147       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:33266->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.457026       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:33154->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.457176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:33138->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.457291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:33252->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.457375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:33248->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.457455       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:33262->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.457643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:33238->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.458798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:33224->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.458888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:33216->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.458965       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:33202->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.459040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:33186->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.459121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:33178->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.459201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:33172->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.459274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:33170->192.168.49.2:8443: read: connection reset by peer
	E0420 01:11:59.459366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:33160->192.168.49.2:8443: read: connection reset by peer
	
	
	==> kubelet <==
	Apr 20 01:11:30 ha-159256 kubelet[759]: I0420 01:11:30.403800     759 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 20 01:11:40 ha-159256 kubelet[759]: I0420 01:11:40.559676     759 scope.go:117] "RemoveContainer" containerID="06e9d185fc76820c8d4ebe4c742167ea8dce34de1ddf59e29df523c784efb574"
	Apr 20 01:11:52 ha-159256 kubelet[759]: I0420 01:11:52.780087     759 scope.go:117] "RemoveContainer" containerID="06e9d185fc76820c8d4ebe4c742167ea8dce34de1ddf59e29df523c784efb574"
	Apr 20 01:11:52 ha-159256 kubelet[759]: I0420 01:11:52.780375     759 scope.go:117] "RemoveContainer" containerID="9bce280a1f3bb32ef1f3151a5e0423f95408b600fbfe9377fc2e263dd820e84a"
	Apr 20 01:11:52 ha-159256 kubelet[759]: E0420 01:11:52.783828     759 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-159256_kube-system(9ef6fae9331a9a1c003d35fa15cf6660)\"" pod="kube-system/kube-controller-manager-ha-159256" podUID="9ef6fae9331a9a1c003d35fa15cf6660"
	Apr 20 01:11:56 ha-159256 kubelet[759]: I0420 01:11:56.789285     759 scope.go:117] "RemoveContainer" containerID="de32529a9486b6d387c442cca7504c127ddaefc543c23d40df44ed75924d6d9f"
	Apr 20 01:11:56 ha-159256 kubelet[759]: I0420 01:11:56.790380     759 status_manager.go:853] "Failed to get status for pod" podUID="4728ea1f835f896149e1e7c6a6231264" pod="kube-system/kube-apiserver-ha-159256" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-159256\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Apr 20 01:11:56 ha-159256 kubelet[759]: E0420 01:11:56.793141     759 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-159256.17c7d7cb040f02cf\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-159256.17c7d7cb040f02cf  kube-system   2587 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-159256,UID:4728ea1f835f896149e1e7c6a6231264,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.30.0\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-159256,},FirstTimestamp:2024-04-20 01:10:50 +0000 UTC,LastTimestamp:2024-04-20 01:11:56.792275813 +0000 UTC m=+73.438021395,Count:2,Type:Normal,EventTime:0001-01-01 00:00
:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-159256,}"
	Apr 20 01:11:59 ha-159256 kubelet[759]: E0420 01:11:59.395408     759 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:56938->192.168.49.254:8443: read: connection reset by peer
	Apr 20 01:11:59 ha-159256 kubelet[759]: E0420 01:11:59.395475     759 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:56890->192.168.49.254:8443: read: connection reset by peer
	Apr 20 01:11:59 ha-159256 kubelet[759]: E0420 01:11:59.395816     759 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:56928->192.168.49.254:8443: read: connection reset by peer
	Apr 20 01:11:59 ha-159256 kubelet[759]: E0420 01:11:59.396141     759 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:56906->192.168.49.254:8443: read: connection reset by peer
	Apr 20 01:12:00 ha-159256 kubelet[759]: I0420 01:12:00.798532     759 scope.go:117] "RemoveContainer" containerID="711e075432c6e9659bbc6fe5e11f79eb3b49e5ac63708bc08e385242b9ca8479"
	Apr 20 01:12:02 ha-159256 kubelet[759]: I0420 01:12:02.726824     759 scope.go:117] "RemoveContainer" containerID="9bce280a1f3bb32ef1f3151a5e0423f95408b600fbfe9377fc2e263dd820e84a"
	Apr 20 01:12:02 ha-159256 kubelet[759]: E0420 01:12:02.727320     759 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-159256_kube-system(9ef6fae9331a9a1c003d35fa15cf6660)\"" pod="kube-system/kube-controller-manager-ha-159256" podUID="9ef6fae9331a9a1c003d35fa15cf6660"
	Apr 20 01:12:02 ha-159256 kubelet[759]: I0420 01:12:02.804711     759 scope.go:117] "RemoveContainer" containerID="9bce280a1f3bb32ef1f3151a5e0423f95408b600fbfe9377fc2e263dd820e84a"
	Apr 20 01:12:02 ha-159256 kubelet[759]: E0420 01:12:02.805158     759 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-159256_kube-system(9ef6fae9331a9a1c003d35fa15cf6660)\"" pod="kube-system/kube-controller-manager-ha-159256" podUID="9ef6fae9331a9a1c003d35fa15cf6660"
	Apr 20 01:12:07 ha-159256 kubelet[759]: I0420 01:12:07.814973     759 scope.go:117] "RemoveContainer" containerID="cc9d580e4437aaafb4e9ad8429e10c05ff66366fadccd4fff9a5de22a1c77fbe"
	Apr 20 01:12:10 ha-159256 kubelet[759]: E0420 01:12:10.673481     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-159256?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 20 01:12:10 ha-159256 kubelet[759]: E0420 01:12:10.943351     759 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-159256\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-159256?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 20 01:12:17 ha-159256 kubelet[759]: I0420 01:12:17.559706     759 scope.go:117] "RemoveContainer" containerID="9bce280a1f3bb32ef1f3151a5e0423f95408b600fbfe9377fc2e263dd820e84a"
	Apr 20 01:12:20 ha-159256 kubelet[759]: E0420 01:12:20.673942     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-159256?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 20 01:12:20 ha-159256 kubelet[759]: E0420 01:12:20.944272     759 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-159256\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-159256?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 20 01:12:30 ha-159256 kubelet[759]: E0420 01:12:30.675267     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-159256?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 20 01:12:30 ha-159256 kubelet[759]: E0420 01:12:30.944920     759 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-159256\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-159256?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-159256 -n ha-159256
helpers_test.go:261: (dbg) Run:  kubectl --context ha-159256 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (123.58s)


Test pass (294/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.92
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.30.0/json-events 6.87
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.08
18 TestDownloadOnly/v1.30.0/DeleteAll 0.19
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 201.41
29 TestAddons/parallel/Registry 17.96
31 TestAddons/parallel/InspektorGadget 11.79
35 TestAddons/parallel/CSI 53.02
37 TestAddons/parallel/CloudSpanner 6.57
38 TestAddons/parallel/LocalPath 52.44
39 TestAddons/parallel/NvidiaDevicePlugin 5.56
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.21
44 TestAddons/StoppedEnableDisable 12.24
45 TestCertOptions 39.05
46 TestCertExpiration 248.33
48 TestForceSystemdFlag 41.6
49 TestForceSystemdEnv 41.17
55 TestErrorSpam/setup 32.34
56 TestErrorSpam/start 0.7
57 TestErrorSpam/status 1
58 TestErrorSpam/pause 1.8
59 TestErrorSpam/unpause 1.83
60 TestErrorSpam/stop 1.43
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 77.03
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 25.34
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.11
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.76
72 TestFunctional/serial/CacheCmd/cache/add_local 1.09
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
74 TestFunctional/serial/CacheCmd/cache/list 0.07
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
77 TestFunctional/serial/CacheCmd/cache/delete 0.14
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
80 TestFunctional/serial/ExtraConfig 36.18
81 TestFunctional/serial/ComponentHealth 0.11
82 TestFunctional/serial/LogsCmd 1.66
83 TestFunctional/serial/LogsFileCmd 1.72
84 TestFunctional/serial/InvalidService 4.18
86 TestFunctional/parallel/ConfigCmd 0.57
87 TestFunctional/parallel/DashboardCmd 13.94
88 TestFunctional/parallel/DryRun 0.6
89 TestFunctional/parallel/InternationalLanguage 0.19
90 TestFunctional/parallel/StatusCmd 1.05
94 TestFunctional/parallel/ServiceCmdConnect 12.77
95 TestFunctional/parallel/AddonsCmd 0.25
96 TestFunctional/parallel/PersistentVolumeClaim 26.74
98 TestFunctional/parallel/SSHCmd 0.74
99 TestFunctional/parallel/CpCmd 2.05
101 TestFunctional/parallel/FileSync 0.35
102 TestFunctional/parallel/CertSync 2.09
106 TestFunctional/parallel/NodeLabels 0.12
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.83
110 TestFunctional/parallel/License 0.39
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.31
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
124 TestFunctional/parallel/ProfileCmd/profile_list 0.4
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
126 TestFunctional/parallel/MountCmd/any-port 7.47
127 TestFunctional/parallel/ServiceCmd/List 0.56
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
130 TestFunctional/parallel/ServiceCmd/Format 0.5
131 TestFunctional/parallel/ServiceCmd/URL 0.43
132 TestFunctional/parallel/MountCmd/specific-port 2.4
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.41
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 0.92
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.1
141 TestFunctional/parallel/ImageCommands/Setup 1.67
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.22
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.43
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.88
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.87
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.29
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
152 TestFunctional/delete_addon-resizer_images 0.08
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 166.22
159 TestMultiControlPlane/serial/DeployApp 7.39
160 TestMultiControlPlane/serial/PingHostFromPods 1.73
161 TestMultiControlPlane/serial/AddWorkerNode 54.79
162 TestMultiControlPlane/serial/NodeLabels 0.1
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.76
164 TestMultiControlPlane/serial/CopyFile 19.52
165 TestMultiControlPlane/serial/StopSecondaryNode 12.79
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
167 TestMultiControlPlane/serial/RestartSecondaryNode 43.13
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.8
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 196.22
170 TestMultiControlPlane/serial/DeleteSecondaryNode 12.93
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
172 TestMultiControlPlane/serial/StopCluster 35.97
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.58
175 TestMultiControlPlane/serial/AddSecondaryNode 63.66
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.76
180 TestJSONOutput/start/Command 76.37
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.75
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.67
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.97
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 44.17
206 TestKicCustomNetwork/use_default_bridge_network 37.26
207 TestKicExistingNetwork 34.53
208 TestKicCustomSubnet 32.86
209 TestKicStaticIP 35.11
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 67.91
214 TestMountStart/serial/StartWithMountFirst 7.09
215 TestMountStart/serial/VerifyMountFirst 0.27
216 TestMountStart/serial/StartWithMountSecond 9.1
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.6
219 TestMountStart/serial/VerifyMountPostDelete 0.26
220 TestMountStart/serial/Stop 1.21
221 TestMountStart/serial/RestartStopped 7.76
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 123.19
226 TestMultiNode/serial/DeployApp2Nodes 4.95
227 TestMultiNode/serial/PingHostFrom2Pods 1.05
228 TestMultiNode/serial/AddNode 47.56
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.34
231 TestMultiNode/serial/CopyFile 10.37
232 TestMultiNode/serial/StopNode 2.29
233 TestMultiNode/serial/StartAfterStop 9.89
234 TestMultiNode/serial/RestartKeepsNodes 81.3
235 TestMultiNode/serial/DeleteNode 5.27
236 TestMultiNode/serial/StopMultiNode 23.86
237 TestMultiNode/serial/RestartMultiNode 51.79
238 TestMultiNode/serial/ValidateNameConflict 36.55
243 TestPreload 117.66
245 TestScheduledStopUnix 108.17
248 TestInsufficientStorage 10.47
249 TestRunningBinaryUpgrade 73.92
251 TestKubernetesUpgrade 382.12
252 TestMissingContainerUpgrade 151.57
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 37.4
256 TestNoKubernetes/serial/StartWithStopK8s 9.04
257 TestNoKubernetes/serial/Start 7.23
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
259 TestNoKubernetes/serial/ProfileList 1.07
260 TestNoKubernetes/serial/Stop 1.27
261 TestNoKubernetes/serial/StartNoArgs 8.44
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.48
263 TestStoppedBinaryUpgrade/Setup 1.55
264 TestStoppedBinaryUpgrade/Upgrade 65.98
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
274 TestPause/serial/Start 74.34
275 TestPause/serial/SecondStartNoReconfiguration 36.35
276 TestPause/serial/Pause 1.07
277 TestPause/serial/VerifyStatus 0.44
278 TestPause/serial/Unpause 0.95
279 TestPause/serial/PauseAgain 1.22
280 TestPause/serial/DeletePaused 3.09
281 TestPause/serial/VerifyDeletedResources 0.51
289 TestNetworkPlugins/group/false 5.06
294 TestStartStop/group/old-k8s-version/serial/FirstStart 149.29
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.63
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
297 TestStartStop/group/old-k8s-version/serial/Stop 11.99
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
299 TestStartStop/group/old-k8s-version/serial/SecondStart 140.97
301 TestStartStop/group/embed-certs/serial/FirstStart 83.39
302 TestStartStop/group/embed-certs/serial/DeployApp 9.37
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
304 TestStartStop/group/embed-certs/serial/Stop 11.94
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/embed-certs/serial/SecondStart 266.26
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
310 TestStartStop/group/old-k8s-version/serial/Pause 3.03
312 TestStartStop/group/no-preload/serial/FirstStart 66
313 TestStartStop/group/no-preload/serial/DeployApp 9.34
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
315 TestStartStop/group/no-preload/serial/Stop 12.01
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/no-preload/serial/SecondStart 267.28
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
320 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
321 TestStartStop/group/embed-certs/serial/Pause 3.13
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.19
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.36
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.52
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
331 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
332 TestStartStop/group/no-preload/serial/Pause 4.72
334 TestStartStop/group/newest-cni/serial/FirstStart 47.01
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
337 TestStartStop/group/newest-cni/serial/Stop 1.3
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/newest-cni/serial/SecondStart 16.61
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
343 TestStartStop/group/newest-cni/serial/Pause 2.84
344 TestNetworkPlugins/group/auto/Start 78.13
345 TestNetworkPlugins/group/auto/KubeletFlags 0.5
346 TestNetworkPlugins/group/auto/NetCatPod 11.3
347 TestNetworkPlugins/group/auto/DNS 0.19
348 TestNetworkPlugins/group/auto/Localhost 0.15
349 TestNetworkPlugins/group/auto/HairPin 0.16
350 TestNetworkPlugins/group/kindnet/Start 80.15
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.09
355 TestNetworkPlugins/group/calico/Start 77.35
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.39
359 TestNetworkPlugins/group/kindnet/DNS 0.18
360 TestNetworkPlugins/group/kindnet/Localhost 0.2
361 TestNetworkPlugins/group/kindnet/HairPin 0.16
362 TestNetworkPlugins/group/custom-flannel/Start 67.59
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.39
365 TestNetworkPlugins/group/calico/NetCatPod 12.31
366 TestNetworkPlugins/group/calico/DNS 0.3
367 TestNetworkPlugins/group/calico/Localhost 0.24
368 TestNetworkPlugins/group/calico/HairPin 0.25
369 TestNetworkPlugins/group/enable-default-cni/Start 89.01
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
372 TestNetworkPlugins/group/custom-flannel/DNS 0.23
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
375 TestNetworkPlugins/group/flannel/Start 67.59
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
383 TestNetworkPlugins/group/flannel/NetCatPod 11.34
384 TestNetworkPlugins/group/bridge/Start 93.76
385 TestNetworkPlugins/group/flannel/DNS 0.38
386 TestNetworkPlugins/group/flannel/Localhost 0.35
387 TestNetworkPlugins/group/flannel/HairPin 0.21
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
389 TestNetworkPlugins/group/bridge/NetCatPod 11.26
390 TestNetworkPlugins/group/bridge/DNS 0.2
391 TestNetworkPlugins/group/bridge/Localhost 0.14
392 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (10.92s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-784633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-784633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.92049818s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.92s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-784633
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-784633: exit status 85 (80.861972ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-784633 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC |          |
	|         | -p download-only-784633        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:45:54
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:45:54.500472 1643629 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:45:54.500660 1643629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:45:54.500672 1643629 out.go:304] Setting ErrFile to fd 2...
	I0420 00:45:54.500678 1643629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:45:54.500940 1643629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	W0420 00:45:54.501080 1643629 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18703-1638187/.minikube/config/config.json: open /home/jenkins/minikube-integration/18703-1638187/.minikube/config/config.json: no such file or directory
	I0420 00:45:54.501606 1643629 out.go:298] Setting JSON to true
	I0420 00:45:54.502488 1643629 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26901,"bootTime":1713547053,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0420 00:45:54.502563 1643629 start.go:139] virtualization:  
	I0420 00:45:54.505558 1643629 out.go:97] [download-only-784633] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	W0420 00:45:54.505726 1643629 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball: no such file or directory
	I0420 00:45:54.507662 1643629 out.go:169] MINIKUBE_LOCATION=18703
	I0420 00:45:54.505833 1643629 notify.go:220] Checking for updates...
	I0420 00:45:54.511605 1643629 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:45:54.513471 1643629 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 00:45:54.515414 1643629 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	I0420 00:45:54.517164 1643629 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0420 00:45:54.520317 1643629 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0420 00:45:54.520603 1643629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:45:54.540987 1643629 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0420 00:45:54.541087 1643629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:45:54.603369 1643629 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-20 00:45:54.593979903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:45:54.603486 1643629 docker.go:295] overlay module found
	I0420 00:45:54.605346 1643629 out.go:97] Using the docker driver based on user configuration
	I0420 00:45:54.605372 1643629 start.go:297] selected driver: docker
	I0420 00:45:54.605378 1643629 start.go:901] validating driver "docker" against <nil>
	I0420 00:45:54.605482 1643629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:45:54.655535 1643629 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-20 00:45:54.646758667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:45:54.655702 1643629 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0420 00:45:54.655993 1643629 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0420 00:45:54.656151 1643629 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0420 00:45:54.658128 1643629 out.go:169] Using Docker driver with root privileges
	I0420 00:45:54.659925 1643629 cni.go:84] Creating CNI manager for ""
	I0420 00:45:54.659952 1643629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:45:54.659969 1643629 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0420 00:45:54.660045 1643629 start.go:340] cluster config:
	{Name:download-only-784633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-784633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:45:54.661877 1643629 out.go:97] Starting "download-only-784633" primary control-plane node in "download-only-784633" cluster
	I0420 00:45:54.661898 1643629 cache.go:121] Beginning downloading kic base image for docker with crio
	I0420 00:45:54.663406 1643629 out.go:97] Pulling base image v0.0.43 ...
	I0420 00:45:54.663430 1643629 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 00:45:54.663583 1643629 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon
	I0420 00:45:54.677124 1643629 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 to local cache
	I0420 00:45:54.677303 1643629 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local cache directory
	I0420 00:45:54.677428 1643629 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 to local cache
	I0420 00:45:54.736825 1643629 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0420 00:45:54.736865 1643629 cache.go:56] Caching tarball of preloaded images
	I0420 00:45:54.737480 1643629 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 00:45:54.740092 1643629 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0420 00:45:54.740112 1643629 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0420 00:45:54.885266 1643629 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0420 00:46:00.306724 1643629 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 as a tarball
	I0420 00:46:01.637281 1643629 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0420 00:46:01.637386 1643629 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0420 00:46:02.740124 1643629 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0420 00:46:02.740484 1643629 profile.go:143] Saving config to /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/download-only-784633/config.json ...
	I0420 00:46:02.740517 1643629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/download-only-784633/config.json: {Name:mk85e7468d0010043d8b9696769271c9b0d6b0f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0420 00:46:02.740695 1643629 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0420 00:46:02.740908 1643629 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-784633 host does not exist
	  To start a cluster, run: "minikube start -p download-only-784633"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-784633
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.30.0/json-events (6.87s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-161385 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-161385 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.87208574s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (6.87s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-161385
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-161385: exit status 85 (77.102576ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-784633 | jenkins | v1.33.0 | 20 Apr 24 00:45 UTC |                     |
	|         | -p download-only-784633        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| delete  | -p download-only-784633        | download-only-784633 | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC | 20 Apr 24 00:46 UTC |
	| start   | -o=json --download-only        | download-only-161385 | jenkins | v1.33.0 | 20 Apr 24 00:46 UTC |                     |
	|         | -p download-only-161385        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/20 00:46:05
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0420 00:46:05.858058 1643796 out.go:291] Setting OutFile to fd 1 ...
	I0420 00:46:05.858195 1643796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:46:05.858203 1643796 out.go:304] Setting ErrFile to fd 2...
	I0420 00:46:05.858208 1643796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 00:46:05.858662 1643796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 00:46:05.859116 1643796 out.go:298] Setting JSON to true
	I0420 00:46:05.860133 1643796 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26913,"bootTime":1713547053,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0420 00:46:05.860221 1643796 start.go:139] virtualization:  
	I0420 00:46:05.862675 1643796 out.go:97] [download-only-161385] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0420 00:46:05.864777 1643796 out.go:169] MINIKUBE_LOCATION=18703
	I0420 00:46:05.862857 1643796 notify.go:220] Checking for updates...
	I0420 00:46:05.868836 1643796 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 00:46:05.870761 1643796 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 00:46:05.872836 1643796 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	I0420 00:46:05.874539 1643796 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0420 00:46:05.878258 1643796 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0420 00:46:05.878539 1643796 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 00:46:05.900740 1643796 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0420 00:46:05.900848 1643796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:46:05.962492 1643796 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-20 00:46:05.953754176 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:46:05.962599 1643796 docker.go:295] overlay module found
	I0420 00:46:05.964950 1643796 out.go:97] Using the docker driver based on user configuration
	I0420 00:46:05.964976 1643796 start.go:297] selected driver: docker
	I0420 00:46:05.964983 1643796 start.go:901] validating driver "docker" against <nil>
	I0420 00:46:05.965141 1643796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 00:46:06.018718 1643796 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-20 00:46:06.009323535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 00:46:06.018898 1643796 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0420 00:46:06.019165 1643796 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0420 00:46:06.019345 1643796 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0420 00:46:06.021695 1643796 out.go:169] Using Docker driver with root privileges
	I0420 00:46:06.023459 1643796 cni.go:84] Creating CNI manager for ""
	I0420 00:46:06.023489 1643796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0420 00:46:06.023498 1643796 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0420 00:46:06.023582 1643796 start.go:340] cluster config:
	{Name:download-only-161385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-161385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 00:46:06.025447 1643796 out.go:97] Starting "download-only-161385" primary control-plane node in "download-only-161385" cluster
	I0420 00:46:06.025478 1643796 cache.go:121] Beginning downloading kic base image for docker with crio
	I0420 00:46:06.027478 1643796 out.go:97] Pulling base image v0.0.43 ...
	I0420 00:46:06.027513 1643796 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:06.027610 1643796 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local docker daemon
	I0420 00:46:06.041836 1643796 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 to local cache
	I0420 00:46:06.041981 1643796 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local cache directory
	I0420 00:46:06.042007 1643796 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 in local cache directory, skipping pull
	I0420 00:46:06.042016 1643796 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 exists in cache, skipping pull
	I0420 00:46:06.042025 1643796 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 as a tarball
	I0420 00:46:06.096024 1643796 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0420 00:46:06.096070 1643796 cache.go:56] Caching tarball of preloaded images
	I0420 00:46:06.096247 1643796 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0420 00:46:06.098349 1643796 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0420 00:46:06.098371 1643796 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 ...
	I0420 00:46:06.203786 1643796 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:0b6b385f66a101b8e819a9a918236667 -> /home/jenkins/minikube-integration/18703-1638187/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-161385 host does not exist
	  To start a cluster, run: "minikube start -p download-only-161385"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.08s)
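
The preload fetch logged at download.go:107 above appends a "?checksum=md5:..." query to the tarball URL so the download can be verified once it is on disk. A minimal Go sketch of that verify-while-downloading step, assuming a placeholder URL and destination path (only the md5 value is taken from the log):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and fails if the payload's
// MD5 digest does not match wantHex.
func downloadWithMD5(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Tee every byte into both the file and the hash so the tarball
	// is only read once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Placeholder URL and path; the md5 is the one from the log line above.
	err := downloadWithMD5("https://example.com/preload.tar.lz4",
		"/tmp/preload.tar.lz4", "0b6b385f66a101b8e819a9a918236667")
	fmt.Println(err)
}

Hashing through io.MultiWriter avoids a second pass over a multi-hundred-megabyte tarball.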

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.19s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-161385
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-562090 --alsologtostderr --binary-mirror http://127.0.0.1:39787 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-562090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-562090
--- PASS: TestBinaryMirror (0.55s)
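
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (127.0.0.1:39787 above). Any server that exposes the expected binaries over HTTP will do; a hypothetical minimal mirror is just a static file server (the directory path here is illustrative, not a layout minikube mandates):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory of pre-downloaded binaries so that a
	// "minikube start --binary-mirror http://127.0.0.1:39787" run
	// never has to reach the public download servers.
	http.Handle("/", http.FileServer(http.Dir("/var/cache/minikube-binaries")))
	log.Fatal(http.ListenAndServe("127.0.0.1:39787", nil))
}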

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-747503
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-747503: exit status 85 (86.10146ms)

-- stdout --
	* Profile "addons-747503" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-747503"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-747503
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-747503: exit status 85 (85.877221ms)

-- stdout --
	* Profile "addons-747503" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-747503"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (201.41s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-747503 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-747503 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m21.405307772s)
--- PASS: TestAddons/Setup (201.41s)

TestAddons/parallel/Registry (17.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 48.252669ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-sx6fv" [c3fda03d-8cd2-4cff-9835-e17c079b7e05] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005606839s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5c8mf" [78326941-b968-43a4-865c-3f7c843b92c7] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005125812s
addons_test.go:340: (dbg) Run:  kubectl --context addons-747503 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-747503 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-747503 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.368596159s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 ip
2024/04/20 00:49:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-arm64 -p addons-747503 addons disable registry --alsologtostderr -v=1: (1.100966937s)
--- PASS: TestAddons/parallel/Registry (17.96s)
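
The registry probe above is "wget --spider" run from a throwaway busybox pod, i.e. a reachability check that never downloads the body. The same probe sketched in Go for clarity (same service URL as the test; it only resolves from inside the cluster):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// spider issues a HEAD request and reports the status line without
// consuming the response body, mirroring wget --spider -S.
func spider(url string) error {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Head(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
	return nil
}

func main() {
	if err := spider("http://registry.kube-system.svc.cluster.local"); err != nil {
		fmt.Println("registry unreachable:", err)
	}
}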

                                                
                                    
TestAddons/parallel/InspektorGadget (11.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j48lz" [1c6fda8f-82c7-43ad-8c7d-11de076291e3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00443244s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-747503
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-747503: (5.782373407s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

TestAddons/parallel/CSI (53.02s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 56.361584ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-747503 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-747503 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a986d1cc-3c51-4cfe-9ae5-04685e5f6875] Pending
helpers_test.go:344: "task-pv-pod" [a986d1cc-3c51-4cfe-9ae5-04685e5f6875] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a986d1cc-3c51-4cfe-9ae5-04685e5f6875] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003955583s
addons_test.go:584: (dbg) Run:  kubectl --context addons-747503 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-747503 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-747503 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-747503 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-747503 delete pod task-pv-pod: (1.010753517s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-747503 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-747503 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-747503 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [93a9da43-f408-49f1-aa60-b84dcd3ebe4e] Pending
helpers_test.go:344: "task-pv-pod-restore" [93a9da43-f408-49f1-aa60-b84dcd3ebe4e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [93a9da43-f408-49f1-aa60-b84dcd3ebe4e] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003667039s
addons_test.go:626: (dbg) Run:  kubectl --context addons-747503 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-747503 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-747503 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-747503 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.708539172s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.02s)
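
The runs of helpers_test.go:394 lines above are a poll loop: the helper re-executes the same kubectl probe until the PVC reports the wanted phase or the 6m0s budget is spent. A rough stand-alone equivalent (the 2s poll interval is an assumption; the harness may use a different backoff):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase repeats the kubectl jsonpath probe until the claim
// reaches the wanted phase or the deadline passes.
func waitForPVCPhase(kubectx, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", "default").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %q never reached phase %q", name, want)
}

func main() {
	fmt.Println(waitForPVCPhase("addons-747503", "hpvc", "Bound", 6*time.Minute))
}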

                                                
                                    
TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-7lmgv" [48f65c56-a870-4d8d-b6d5-9e8070d92042] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003988972s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-747503
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

TestAddons/parallel/LocalPath (52.44s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-747503 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-747503 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-747503 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [315401f9-9cb1-45d4-9a06-f57ef5f83735] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [315401f9-9cb1-45d4-9a06-f57ef5f83735] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [315401f9-9cb1-45d4-9a06-f57ef5f83735] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.007570399s
addons_test.go:891: (dbg) Run:  kubectl --context addons-747503 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 ssh "cat /opt/local-path-provisioner/pvc-b29b3cd7-c850-4a4e-b0ba-8a8cc403a41d_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-747503 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-747503 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-747503 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-747503 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.268403843s)
--- PASS: TestAddons/parallel/LocalPath (52.44s)
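
The ssh "cat" above exposes the provisioner's host-side layout: /opt/local-path-provisioner/<volumeName>_<namespace>_<claimName>. A tiny helper that rebuilds that path from its parts; this is behaviour observed in this log, not a documented contract of local-path-provisioner:

package main

import (
	"fmt"
	"path/filepath"
)

// localPathDir reconstructs the on-host directory that backs a
// local-path PV, following the naming visible in the log.
func localPathDir(volumeName, namespace, claimName string) string {
	return filepath.Join("/opt/local-path-provisioner",
		fmt.Sprintf("%s_%s_%s", volumeName, namespace, claimName))
}

func main() {
	// Prints the same directory the test read file1 from.
	fmt.Println(localPathDir("pvc-b29b3cd7-c850-4a4e-b0ba-8a8cc403a41d", "default", "test-pvc"))
}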

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8wcvh" [1dc1e685-c035-4a95-99c7-d40ef680694c] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004879421s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-747503
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-q5cff" [d045e044-f8a8-4eca-883f-f5fca90a4703] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006084421s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-747503 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-747503 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/StoppedEnableDisable (12.24s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-747503
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-747503: (11.955094997s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-747503
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-747503
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-747503
--- PASS: TestAddons/StoppedEnableDisable (12.24s)

TestCertOptions (39.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-938242 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-938242 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.349573172s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-938242 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-938242 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-938242 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-938242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-938242
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-938242: (1.984842448s)
--- PASS: TestCertOptions (39.05s)

TestCertExpiration (248.33s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-668257 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-668257 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.768562681s)
E0420 01:39:35.965629 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-668257 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-668257 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.1633858s)
helpers_test.go:175: Cleaning up "cert-expiration-668257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-668257
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-668257: (2.397462207s)
--- PASS: TestCertExpiration (248.33s)
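
The two starts above differ only in --cert-expiration (3m, then 8760h); the test waits out the short window and confirms the cluster still comes back with fresh certificates. Deciding whether a certificate is inside such a window takes only stdlib Go; the sketch below reads the same apiserver.crt path that TestCertOptions inspects:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the first certificate in a PEM
// file expires within the given window.
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < window, nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver.crt", 3*time.Minute)
	fmt.Println(expiring, err)
}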

                                                
                                    
TestForceSystemdFlag (41.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-796769 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-796769 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.804933898s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-796769 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-796769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-796769
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-796769: (2.42068069s)
--- PASS: TestForceSystemdFlag (41.60s)
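
The ssh step reads /etc/crio/crio.conf.d/02-crio.conf to confirm that --force-systemd actually switched CRI-O's cgroup manager. A sketch of that check; the cgroup_manager key name is an assumption about CRI-O's TOML schema, since the log only shows the file being cat'ed:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// usesSystemdCgroups scans a CRI-O drop-in config for a line like
// cgroup_manager = "systemd" (key name assumed, see note above).
func usesSystemdCgroups(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "cgroup_manager") && strings.Contains(line, "systemd") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	fmt.Println(usesSystemdCgroups("/etc/crio/crio.conf.d/02-crio.conf"))
}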

                                                
                                    
TestForceSystemdEnv (41.17s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-306646 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-306646 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.388058222s)
helpers_test.go:175: Cleaning up "force-systemd-env-306646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-306646
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-306646: (2.778308807s)
--- PASS: TestForceSystemdEnv (41.17s)

TestErrorSpam/setup (32.34s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-402006 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-402006 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-402006 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-402006 --driver=docker  --container-runtime=crio: (32.341216953s)
--- PASS: TestErrorSpam/setup (32.34s)

TestErrorSpam/start (0.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.8s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 pause
--- PASS: TestErrorSpam/pause (1.80s)

TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 stop: (1.228628629s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-402006 --log_dir /tmp/nospam-402006 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18703-1638187/.minikube/files/etc/test/nested/copy/1643623/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-756660 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-756660 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.027574568s)
--- PASS: TestFunctional/serial/StartWithProxy (77.03s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.34s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-756660 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-756660 --alsologtostderr -v=8: (25.335420771s)
functional_test.go:659: soft start took 25.335978908s for "functional-756660" cluster.
--- PASS: TestFunctional/serial/SoftStart (25.34s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-756660 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 cache add registry.k8s.io/pause:3.1: (1.280729502s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 cache add registry.k8s.io/pause:3.3: (1.288564656s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 cache add registry.k8s.io/pause:latest: (1.188663124s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-756660 /tmp/TestFunctionalserialCacheCmdcacheadd_local2612685043/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 cache add minikube-local-cache-test:functional-756660
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 cache delete minikube-local-cache-test:functional-756660
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-756660
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (305.667102ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)
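
The sequence above is the whole contract of "cache reload": remove the image from the node with crictl rmi, watch inspecti fail with exit status 1, reload, and watch inspecti succeed. The same flow driven from Go, using only commands that appear in this log:

package main

import (
	"fmt"
	"os/exec"
)

// ok shells a command out the same way the harness does and reports
// whether it exited zero.
func ok(args ...string) bool {
	return exec.Command(args[0], args[1:]...).Run() == nil
}

func main() {
	const mk, profile, img = "out/minikube-linux-arm64", "functional-756660", "registry.k8s.io/pause:latest"
	ok(mk, "-p", profile, "ssh", "sudo", "crictl", "rmi", img)
	fmt.Println("present after rmi:", ok(mk, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img)) // expect false
	ok(mk, "-p", profile, "cache", "reload")
	fmt.Println("present after reload:", ok(mk, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img)) // expect true
}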

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 kubectl -- --context functional-756660 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-756660 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (36.18s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-756660 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0420 00:59:35.965833 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:35.971644 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:35.981955 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:36.002435 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:36.042658 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:36.122917 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:36.283262 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:36.603777 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:37.244625 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:38.525461 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:41.085674 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:46.205870 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 00:59:56.446073 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-756660 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.17800948s)
functional_test.go:757: restart took 36.178123544s for "functional-756660" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.18s)
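
--extra-config values take the form component.key=value, here apiserver.enable-admission-plugins=NamespaceAutoProvision. A sketch of splitting that form into its parts; minikube's real parser lives elsewhere and handles more edge cases:

package main

import (
	"fmt"
	"strings"
)

// splitExtraConfig breaks "component.key=value" into its three parts.
func splitExtraConfig(arg string) (component, key, value string, err error) {
	kv := strings.SplitN(arg, "=", 2)
	if len(kv) != 2 {
		return "", "", "", fmt.Errorf("missing '=' in %q", arg)
	}
	ck := strings.SplitN(kv[0], ".", 2)
	if len(ck) != 2 {
		return "", "", "", fmt.Errorf("missing component prefix in %q", arg)
	}
	return ck[0], ck[1], kv[1], nil
}

func main() {
	fmt.Println(splitExtraConfig("apiserver.enable-admission-plugins=NamespaceAutoProvision"))
}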

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-756660 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
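
ComponentHealth fetches the control-plane pods as JSON and derives the phase/status lines printed above. A self-contained version of that derivation; it assumes the standard kubeadm "component" label on control-plane pods, which the log itself does not show:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models only the fields the health check needs from
// "kubectl get po -o=json"; the real PodList schema has many more.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-756660",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pl podList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		status := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				status = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Labels["component"], p.Status.Phase, status)
	}
}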

                                                
                                    
TestFunctional/serial/LogsCmd (1.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 logs: (1.659648784s)
--- PASS: TestFunctional/serial/LogsCmd (1.66s)

TestFunctional/serial/LogsFileCmd (1.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 logs --file /tmp/TestFunctionalserialLogsFileCmd3620038910/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 logs --file /tmp/TestFunctionalserialLogsFileCmd3620038910/001/logs.txt: (1.716182753s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.72s)

TestFunctional/serial/InvalidService (4.18s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-756660 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-756660
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-756660: exit status 115 (398.243804ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31648 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-756660 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.18s)

TestFunctional/parallel/ConfigCmd (0.57s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 config get cpus: exit status 14 (106.741372ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 config get cpus: exit status 14 (98.46903ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)

TestFunctional/parallel/DashboardCmd (13.94s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-756660 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-756660 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1670100: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.94s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-756660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-756660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (220.772596ms)

-- stdout --
	* [functional-756660] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0420 01:00:48.036583 1669601 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:00:48.036732 1669601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:00:48.036758 1669601 out.go:304] Setting ErrFile to fd 2...
	I0420 01:00:48.036781 1669601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:00:48.037063 1669601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 01:00:48.037482 1669601 out.go:298] Setting JSON to false
	I0420 01:00:48.038487 1669601 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27795,"bootTime":1713547053,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0420 01:00:48.038570 1669601 start.go:139] virtualization:  
	I0420 01:00:48.041483 1669601 out.go:177] * [functional-756660] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0420 01:00:48.044932 1669601 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:00:48.044971 1669601 notify.go:220] Checking for updates...
	I0420 01:00:48.047552 1669601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:00:48.050329 1669601 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 01:00:48.053079 1669601 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	I0420 01:00:48.055638 1669601 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0420 01:00:48.058178 1669601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:00:48.061210 1669601 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:00:48.061771 1669601 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:00:48.081108 1669601 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0420 01:00:48.081221 1669601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 01:00:48.169588 1669601 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-20 01:00:48.160402737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 01:00:48.169706 1669601 docker.go:295] overlay module found
	I0420 01:00:48.173164 1669601 out.go:177] * Using the docker driver based on existing profile
	I0420 01:00:48.175703 1669601 start.go:297] selected driver: docker
	I0420 01:00:48.175740 1669601 start.go:901] validating driver "docker" against &{Name:functional-756660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-756660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:00:48.175849 1669601 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:00:48.178850 1669601 out.go:177] 
	W0420 01:00:48.181325 1669601 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0420 01:00:48.183926 1669601 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-756660 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.60s)
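
The failed dry run above is the expected path: the requested memory is validated up front and the run aborts with a resource error (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of that gate in isolation; the 1800MB floor is taken from the error text in this log, not from the minikube source, so treat it as illustrative:

package main

import "fmt"

// minUsableMB mirrors the floor quoted in the error above.
const minUsableMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	// 250MB, as passed via --memory in the test, fails the gate.
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}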

TestFunctional/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-756660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-756660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (191.466997ms)

-- stdout --
	* [functional-756660] minikube v1.33.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0420 01:00:47.839597 1669562 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:00:47.839874 1669562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:00:47.839889 1669562 out.go:304] Setting ErrFile to fd 2...
	I0420 01:00:47.839895 1669562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:00:47.840281 1669562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 01:00:47.840706 1669562 out.go:298] Setting JSON to false
	I0420 01:00:47.841771 1669562 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27795,"bootTime":1713547053,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0420 01:00:47.841946 1669562 start.go:139] virtualization:  
	I0420 01:00:47.845388 1669562 out.go:177] * [functional-756660] minikube v1.33.0 sur Ubuntu 20.04 (arm64)
	I0420 01:00:47.847793 1669562 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:00:47.847832 1669562 notify.go:220] Checking for updates...
	I0420 01:00:47.851748 1669562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:00:47.853989 1669562 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 01:00:47.855983 1669562 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	I0420 01:00:47.857941 1669562 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0420 01:00:47.859717 1669562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:00:47.862098 1669562 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:00:47.862648 1669562 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:00:47.882671 1669562 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0420 01:00:47.882782 1669562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 01:00:47.951525 1669562 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-20 01:00:47.938998175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 01:00:47.951644 1669562 docker.go:295] overlay module found
	I0420 01:00:47.954159 1669562 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0420 01:00:47.956436 1669562 start.go:297] selected driver: docker
	I0420 01:00:47.956457 1669562 start.go:901] validating driver "docker" against &{Name:functional-756660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43@sha256:7ff490df401cc0fbf19a4521544ae8f4a00cc163e92a95017a8d8bfdb1422737 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-756660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0420 01:00:47.956569 1669562 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:00:47.959540 1669562 out.go:177] 
	W0420 01:00:47.961942 1669562 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0420 01:00:47.964192 1669562 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
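
The French output above is driven by the locale environment: the binary selects its translation from the locale variables. A sketch that reproduces the localized dry run, assuming the locale is picked up from LC_ALL/LANG and that the binary ships the fr message bundle (both appear true for this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-756660",
		"--dry-run", "--memory", "250MB", "--driver=docker")
	// Override the locale so minikube selects its French messages.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // expect "* Utilisation du pilote docker ..."
	if err != nil {
		fmt.Println("exit:", err) // exit status 23: memory below the minimum
	}
}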

TestFunctional/parallel/StatusCmd (1.05s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/ServiceCmdConnect (12.77s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-756660 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-756660 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-s98x8" [ed99fbf3-1e2d-49bf-994c-21c6e584def7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-s98x8" [ed99fbf3-1e2d-49bf-994c-21c6e584def7] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003684785s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30840
functional_test.go:1671: http://192.168.49.2:30840: success! body:

Hostname: hello-node-connect-6f49f58cd5-s98x8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30840
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.77s)
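
The check above boils down to: resolve the NodePort URL, GET it, and confirm the echoserver reply names the serving pod. A standalone sketch against the endpoint printed in this log (the URL and hostname prefix are specific to this run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Endpoint printed by "minikube service hello-node-connect --url" above.
	url := "http://192.168.49.2:30840"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// The echoserver reply includes the serving pod's hostname.
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Printf("success! body:\n%s", body)
	}
}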

TestFunctional/parallel/AddonsCmd (0.25s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.25s)

TestFunctional/parallel/PersistentVolumeClaim (26.74s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f81fdb9f-7575-4942-8580-a611a0c727bf] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00414713s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-756660 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-756660 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-756660 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-756660 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [73a9df3f-9050-466f-9811-e07fa9301b6d] Pending
helpers_test.go:344: "sp-pod" [73a9df3f-9050-466f-9811-e07fa9301b6d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [73a9df3f-9050-466f-9811-e07fa9301b6d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.0037248s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-756660 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-756660 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-756660 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [48fd3f14-f89a-4a0e-94d7-698f08fccaa4] Pending
helpers_test.go:344: "sp-pod" [48fd3f14-f89a-4a0e-94d7-698f08fccaa4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004227021s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-756660 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.74s)
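
The sequence above is the persistence check in miniature: write a marker file into the PVC-backed volume, delete and recreate the pod, then confirm the file survived. A sketch of the same steps driven through kubectl, reusing this log's context, pod name, and paths (the real test also waits for pod readiness between steps):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the cluster used in this log.
func kubectl(args ...string) ([]byte, error) {
	base := append([]string{"--context", "functional-756660"}, args...)
	return exec.Command("kubectl", base...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// A readiness wait belongs here before the final check.
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, step := range steps {
		out, err := kubectl(step...)
		fmt.Printf("kubectl %v:\n%s", step, out)
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}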

TestFunctional/parallel/SSHCmd (0.74s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (2.05s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh -n functional-756660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 cp functional-756660:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd545603301/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh -n functional-756660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh -n functional-756660 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.05s)

TestFunctional/parallel/FileSync (0.35s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1643623/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo cat /etc/test/nested/copy/1643623/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.09s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1643623.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo cat /etc/ssl/certs/1643623.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1643623.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo cat /usr/share/ca-certificates/1643623.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16436232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo cat /etc/ssl/certs/16436232.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16436232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo cat /usr/share/ca-certificates/16436232.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)

TestFunctional/parallel/NodeLabels (0.12s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-756660 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 ssh "sudo systemctl is-active docker": exit status 1 (406.052215ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 ssh "sudo systemctl is-active containerd": exit status 1 (421.20836ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)
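
The non-zero exits above are the passing case: systemctl is-active prints the unit state and exits non-zero when the unit is inactive (the "Process exited with status 3" in stderr), which minikube ssh surfaces as exit status 1. A sketch of the same probe, assuming a minikube binary on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState asks systemd inside the node for a unit's state.
// systemctl prints "inactive" and exits non-zero when the unit is
// not running, so an error from the command is expected here.
func runtimeState(profile, unit string) string {
	out, _ := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).CombinedOutput()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		state := runtimeState("functional-756660", unit)
		// On a crio cluster both competing runtimes should be inactive.
		fmt.Printf("%s: %s (want inactive)\n", unit, state)
	}
}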

TestFunctional/parallel/License (0.39s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.39s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-756660 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-756660 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-756660 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1667705: os: process already finished
helpers_test.go:502: unable to terminate pid 1667582: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-756660 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-756660 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.31s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-756660 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5f9ea2bd-1f83-4e26-aa23-05b79335c5ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0420 01:00:16.927022 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [5f9ea2bd-1f83-4e26-aa23-05b79335c5ea] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004597149s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.31s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-756660 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.84.137 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-756660 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-756660 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-756660 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-d4frj" [4385ba4c-6850-4fec-99fe-e000bd64e4b4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-d4frj" [4385ba4c-6850-4fec-99fe-e000bd64e4b4] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003960958s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "324.583683ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "75.033481ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "346.039444ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "74.918309ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (7.47s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdany-port211927865/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713574842008492344" to /tmp/TestFunctionalparallelMountCmdany-port211927865/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713574842008492344" to /tmp/TestFunctionalparallelMountCmdany-port211927865/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713574842008492344" to /tmp/TestFunctionalparallelMountCmdany-port211927865/001/test-1713574842008492344
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.305832ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 20 01:00 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 20 01:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 20 01:00 test-1713574842008492344
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh cat /mount-9p/test-1713574842008492344
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-756660 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5cac8a0d-f6d1-4b7c-b92f-d09e7011add8] Pending
helpers_test.go:344: "busybox-mount" [5cac8a0d-f6d1-4b7c-b92f-d09e7011add8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5cac8a0d-f6d1-4b7c-b92f-d09e7011add8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5cac8a0d-f6d1-4b7c-b92f-d09e7011add8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004353654s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-756660 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdany-port211927865/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.47s)
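
The first findmnt probe above fails simply because the 9p mount had not finished coming up; the harness retries and the second probe succeeds. A sketch of that poll, assuming a minikube binary on PATH and this run's profile:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForMount polls findmnt inside the node until the directory
// shows up as a 9p mount or the timeout expires.
func waitForMount(profile, dir string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T "+dir).CombinedOutput()
		if err == nil && strings.Contains(string(out), "9p") {
			return true
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	if waitForMount("functional-756660", "/mount-9p", 30*time.Second) {
		fmt.Println("9p mount is up")
	} else {
		fmt.Println("mount never appeared")
	}
}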

TestFunctional/parallel/ServiceCmd/List (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 service list -o json
functional_test.go:1490: Took "539.431366ms" to run "out/minikube-linux-arm64 -p functional-756660 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30966
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30966
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

TestFunctional/parallel/MountCmd/specific-port (2.4s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdspecific-port2758156703/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (529.791854ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdspecific-port2758156703/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 ssh "sudo umount -f /mount-9p": exit status 1 (323.734169ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-756660 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdspecific-port2758156703/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.40s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1439745217/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1439745217/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1439745217/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T" /mount1: exit status 1 (895.211291ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-756660 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1439745217/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1439745217/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-756660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1439745217/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.92s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.92s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-756660 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-756660
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-756660 image ls --format short --alsologtostderr:
I0420 01:01:16.407885 1672429 out.go:291] Setting OutFile to fd 1 ...
I0420 01:01:16.408078 1672429 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:16.408089 1672429 out.go:304] Setting ErrFile to fd 2...
I0420 01:01:16.408095 1672429 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:16.408359 1672429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
I0420 01:01:16.408967 1672429 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:16.409086 1672429 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:16.409579 1672429 cli_runner.go:164] Run: docker container inspect functional-756660 --format={{.State.Status}}
I0420 01:01:16.425784 1672429 ssh_runner.go:195] Run: systemctl --version
I0420 01:01:16.425856 1672429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-756660
I0420 01:01:16.443702 1672429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34685 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/functional-756660/id_rsa Username:docker}
I0420 01:01:16.542926 1672429 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
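
The short format is one image reference per line, which makes it easy to assert on. A hypothetical helper that checks whether a given reference is present, with the binary path and profile name taken from the log above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hasImage runs `image ls --format short` and reports whether the
    // given reference appears in the newline-separated list.
    func hasImage(profile, ref string) (bool, error) {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
            "image", "ls", "--format", "short").Output()
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.TrimSpace(line) == ref {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("functional-756660", "registry.k8s.io/pause:3.9")
        fmt.Println(ok, err)
    }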

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-756660 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| gcr.io/google-containers/addon-resizer  | functional-756660  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| docker.io/library/nginx                 | alpine             | 8f49f2e379605 | 51.5MB |
| registry.k8s.io/kube-proxy              | v1.30.0            | cb7eac0b42cc1 | 89.1MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/kube-apiserver          | v1.30.0            | 181f57fd3cdb7 | 114MB  |
| registry.k8s.io/kube-controller-manager | v1.30.0            | 68feac521c0f1 | 108MB  |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 547adae34140b | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | latest             | a6ac09e4d8a90 | 197MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-756660 image ls --format table --alsologtostderr:
I0420 01:01:17.005739 1672563 out.go:291] Setting OutFile to fd 1 ...
I0420 01:01:17.005938 1672563 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:17.005944 1672563 out.go:304] Setting ErrFile to fd 2...
I0420 01:01:17.005949 1672563 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:17.006234 1672563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
I0420 01:01:17.006965 1672563 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:17.007098 1672563 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:17.007612 1672563 cli_runner.go:164] Run: docker container inspect functional-756660 --format={{.State.Status}}
I0420 01:01:17.026001 1672563 ssh_runner.go:195] Run: systemctl --version
I0420 01:01:17.026063 1672563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-756660
I0420 01:01:17.057666 1672563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34685 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/functional-756660/id_rsa Username:docker}
I0420 01:01:17.162055 1672563 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-756660 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb","repoDigests":["registry.k8s.io/kube-apiserver@sha256:603450584095e9beb21ab73002fcd49b6e10f6b0194f1e64cca2e3cffa13123e","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"113538528"},
{"id":"68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"108229958"},
{"id":"8f49f2e3796058c0b6568d610301043df2a2e84c72822ed0e2efdbcc4b653edc","repoDigests":["docker.io/library/nginx@sha256:7bd88800d8c18d4f73feeee25e04fcdbeecfc5e0a2b7254a90f4816bb67beadd","docker.io/library/nginx@sha256:a07ebd3327070119b352a40a59fc67c0a40ed9bca13508bfce06f9d8b9ec4000"],"repoTags":["docker.io/library/nginx:alpine"],"size":"51539655"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-756660"],"size":"34114467"},
{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},
{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},
{"id":"a6ac09e4d8a90af2fac86bcd7508777bee5261c602b5ad90b5869925a021ad12","repoDigests":["docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419","docker.io/library/nginx@sha256:50376dc014ca05120de7018b80cbe5b9246e057e8eec26defd40a172f6d8ab55"],"repoTags":["docker.io/library/nginx:latest"],"size":"196976458"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},
{"id":"cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f","repoDigests":["registry.k8s.io/kube-proxy@sha256:a744a3a6db8ed022077d83357b93766fc252bcf01c572b3c3687c80e1e5faa55","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"89133975"},
{"id":"547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0e04e710e758152f5f46761588d3e712c5b836839443b9c2c2d45ee511b803e9","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"61568326"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-756660 image ls --format json --alsologtostderr:
I0420 01:01:16.709427 1672489 out.go:291] Setting OutFile to fd 1 ...
I0420 01:01:16.709572 1672489 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:16.709581 1672489 out.go:304] Setting ErrFile to fd 2...
I0420 01:01:16.709587 1672489 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:16.709938 1672489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
I0420 01:01:16.710929 1672489 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:16.711084 1672489 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:16.711813 1672489 cli_runner.go:164] Run: docker container inspect functional-756660 --format={{.State.Status}}
I0420 01:01:16.731309 1672489 ssh_runner.go:195] Run: systemctl --version
I0420 01:01:16.731369 1672489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-756660
I0420 01:01:16.759791 1672489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34685 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/functional-756660/id_rsa Username:docker}
I0420 01:01:16.857766 1672489 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
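
The JSON output is a flat array whose keys are visible above: id, repoDigests, repoTags, and size (a string, in bytes). A sketch of decoding it in Go; the struct is inferred from those keys, not minikube's internal type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // image matches the keys visible in the JSON above.
    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-756660",
            "image", "ls", "--format", "json").Output()
        if err != nil {
            panic(err)
        }
        var images []image
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            // ids in the output above are 64 hex characters
            fmt.Println(img.ID[:12], img.RepoTags)
        }
    }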

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-756660 image ls --format yaml --alsologtostderr:
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:603450584095e9beb21ab73002fcd49b6e10f6b0194f1e64cca2e3cffa13123e
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "113538528"
- id: 68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "108229958"
- id: cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f
repoDigests:
- registry.k8s.io/kube-proxy@sha256:a744a3a6db8ed022077d83357b93766fc252bcf01c572b3c3687c80e1e5faa55
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "89133975"
- id: a6ac09e4d8a90af2fac86bcd7508777bee5261c602b5ad90b5869925a021ad12
repoDigests:
- docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419
- docker.io/library/nginx@sha256:50376dc014ca05120de7018b80cbe5b9246e057e8eec26defd40a172f6d8ab55
repoTags:
- docker.io/library/nginx:latest
size: "196976458"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-756660
size: "34114467"
- id: 8f49f2e3796058c0b6568d610301043df2a2e84c72822ed0e2efdbcc4b653edc
repoDigests:
- docker.io/library/nginx@sha256:7bd88800d8c18d4f73feeee25e04fcdbeecfc5e0a2b7254a90f4816bb67beadd
- docker.io/library/nginx@sha256:a07ebd3327070119b352a40a59fc67c0a40ed9bca13508bfce06f9d8b9ec4000
repoTags:
- docker.io/library/nginx:alpine
size: "51539655"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0e04e710e758152f5f46761588d3e712c5b836839443b9c2c2d45ee511b803e9
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "61568326"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-756660 image ls --format yaml --alsologtostderr:
I0420 01:01:16.409174 1672430 out.go:291] Setting OutFile to fd 1 ...
I0420 01:01:16.409354 1672430 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:16.409381 1672430 out.go:304] Setting ErrFile to fd 2...
I0420 01:01:16.409400 1672430 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:16.409747 1672430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
I0420 01:01:16.410444 1672430 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:16.410630 1672430 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:16.411122 1672430 cli_runner.go:164] Run: docker container inspect functional-756660 --format={{.State.Status}}
I0420 01:01:16.426585 1672430 ssh_runner.go:195] Run: systemctl --version
I0420 01:01:16.426647 1672430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-756660
I0420 01:01:16.449677 1672430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34685 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/functional-756660/id_rsa Username:docker}
I0420 01:01:16.554324 1672430 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-756660 ssh pgrep buildkitd: exit status 1 (332.75673ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image build -t localhost/my-image:functional-756660 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 image build -t localhost/my-image:functional-756660 testdata/build --alsologtostderr: (2.50852125s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-756660 image build -t localhost/my-image:functional-756660 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a22dba5e99b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-756660
--> fcf9b56391e
Successfully tagged localhost/my-image:functional-756660
fcf9b56391ee8449f8e5c8d56cc07d87bd4f891fc790bd1d6b429ccf067e5dc1
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-756660 image build -t localhost/my-image:functional-756660 testdata/build --alsologtostderr:
I0420 01:01:17.020241 1672568 out.go:291] Setting OutFile to fd 1 ...
I0420 01:01:17.021399 1672568 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:17.021461 1672568 out.go:304] Setting ErrFile to fd 2...
I0420 01:01:17.021482 1672568 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0420 01:01:17.021813 1672568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
I0420 01:01:17.022538 1672568 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:17.023363 1672568 config.go:182] Loaded profile config "functional-756660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0420 01:01:17.024103 1672568 cli_runner.go:164] Run: docker container inspect functional-756660 --format={{.State.Status}}
I0420 01:01:17.050197 1672568 ssh_runner.go:195] Run: systemctl --version
I0420 01:01:17.050248 1672568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-756660
I0420 01:01:17.071039 1672568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34685 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/functional-756660/id_rsa Username:docker}
I0420 01:01:17.189290 1672568 build_images.go:161] Building image from path: /tmp/build.3785059799.tar
I0420 01:01:17.189374 1672568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0420 01:01:17.212658 1672568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3785059799.tar
I0420 01:01:17.220750 1672568 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3785059799.tar: stat -c "%s %y" /var/lib/minikube/build/build.3785059799.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3785059799.tar': No such file or directory
I0420 01:01:17.220775 1672568 ssh_runner.go:362] scp /tmp/build.3785059799.tar --> /var/lib/minikube/build/build.3785059799.tar (3072 bytes)
I0420 01:01:17.247165 1672568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3785059799
I0420 01:01:17.256150 1672568 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3785059799 -xf /var/lib/minikube/build/build.3785059799.tar
I0420 01:01:17.265475 1672568 crio.go:315] Building image: /var/lib/minikube/build/build.3785059799
I0420 01:01:17.265563 1672568 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-756660 /var/lib/minikube/build/build.3785059799 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0420 01:01:19.420391 1672568 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-756660 /var/lib/minikube/build/build.3785059799 --cgroup-manager=cgroupfs: (2.154805794s)
I0420 01:01:19.420456 1672568 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3785059799
I0420 01:01:19.429107 1672568 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3785059799.tar
I0420 01:01:19.437647 1672568 build_images.go:217] Built localhost/my-image:functional-756660 from /tmp/build.3785059799.tar
I0420 01:01:19.437677 1672568 build_images.go:133] succeeded building to: functional-756660
I0420 01:01:19.437682 1672568 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.10s)
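
The STEP lines pin down the shape of the build context: testdata/build evidently holds a content.txt plus a Dockerfile equivalent to FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. A sketch that reproduces the same build against a throwaway context; the Dockerfile body is inferred from the STEP lines, and the content.txt payload is a placeholder:

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
    )

    // Dockerfile body inferred from the STEP 1/3..3/3 lines above; the
    // real testdata/build directory may differ in detail.
    const dockerfile = "FROM gcr.io/k8s-minikube/busybox\n" +
        "RUN true\n" +
        "ADD content.txt /\n"

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        dir, err := os.MkdirTemp("", "build")
        must(err)
        defer os.RemoveAll(dir)
        must(os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644))
        // placeholder payload; the real content.txt ships in testdata/build
        must(os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644))
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-756660",
            "image", "build", "-t", "localhost/my-image:functional-756660", dir)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        must(cmd.Run())
    }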

TestFunctional/parallel/ImageCommands/Setup (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.640767066s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-756660
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.67s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image load --daemon gcr.io/google-containers/addon-resizer:functional-756660 --alsologtostderr
E0420 01:00:57.887526 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
2024/04/20 01:01:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 image load --daemon gcr.io/google-containers/addon-resizer:functional-756660 --alsologtostderr: (5.911898593s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image load --daemon gcr.io/google-containers/addon-resizer:functional-756660 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 image load --daemon gcr.io/google-containers/addon-resizer:functional-756660 --alsologtostderr: (3.107309016s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.872554548s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-756660
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image load --daemon gcr.io/google-containers/addon-resizer:functional-756660 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 image load --daemon gcr.io/google-containers/addon-resizer:functional-756660 --alsologtostderr: (3.715489724s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image save gcr.io/google-containers/addon-resizer:functional-756660 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image rm gcr.io/google-containers/addon-resizer:functional-756660 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-756660 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.043294012s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-756660
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-756660 image save --daemon gcr.io/google-containers/addon-resizer:functional-756660 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-756660
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)
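
ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together exercise a full export/import round trip. The same sequence collapsed into one sketch, using the tar path and image name from the logs above:

    package main

    import (
        "os"
        "os/exec"
    )

    // run executes one command, streaming output, and aborts on failure.
    func run(name string, args ...string) {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

    func main() {
        const (
            mk  = "out/minikube-linux-arm64"
            img = "gcr.io/google-containers/addon-resizer:functional-756660"
            tar = "/home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar"
        )
        run(mk, "-p", "functional-756660", "image", "save", img, tar)        // export to tar
        run(mk, "-p", "functional-756660", "image", "rm", img)               // drop from the node
        run(mk, "-p", "functional-756660", "image", "load", tar)             // re-import from tar
        run("docker", "rmi", img)                                            // drop from the host daemon
        run(mk, "-p", "functional-756660", "image", "save", "--daemon", img) // push back to the host
        run("docker", "image", "inspect", img)                               // verify the round trip
    }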

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-756660
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-756660
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-756660
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (166.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-159256 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0420 01:02:19.807959 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-159256 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m45.379900446s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (166.22s)

TestMultiControlPlane/serial/DeployApp (7.39s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-159256 -- rollout status deployment/busybox: (4.434452157s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-57n5m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-qt4wx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-z9cvl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-57n5m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-qt4wx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-z9cvl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-57n5m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-qt4wx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-z9cvl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.39s)

TestMultiControlPlane/serial/PingHostFromPods (1.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-57n5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-57n5m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-qt4wx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-qt4wx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-z9cvl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-159256 -- exec busybox-fc5497c4f-z9cvl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.73s)
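
The shell pipeline in these checks deserves a note: with busybox's classic nslookup output, line 5 is the "Address 1: <ip> <name>" entry for the queried name, so awk 'NR==5' selects it and cut -d' ' -f3 takes the third space-separated field, the resolved IP, which is then pinged from the pod. The same extraction in Go, under the assumption that the output keeps that five-line shape:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostIP mirrors `nslookup ... | awk 'NR==5' | cut -d' ' -f3`,
    // assuming busybox's five-line nslookup output shape.
    func hostIP(profile, pod string) (string, error) {
        out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", profile,
            "--", "exec", pod, "--", "nslookup", "host.minikube.internal").Output()
        if err != nil {
            return "", err
        }
        lines := strings.Split(string(out), "\n")
        if len(lines) < 5 {
            return "", fmt.Errorf("unexpected nslookup output: %q", out)
        }
        fields := strings.Split(lines[4], " ") // NR==5 -> index 4
        if len(fields) < 3 {
            return "", fmt.Errorf("unexpected address line: %q", lines[4])
        }
        return fields[2], nil // cut -f3 -> index 2
    }

    func main() {
        ip, err := hostIP("ha-159256", "busybox-fc5497c4f-57n5m")
        fmt.Println(ip, err)
    }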

TestMultiControlPlane/serial/AddWorkerNode (54.79s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-159256 -v=7 --alsologtostderr
E0420 01:04:35.965495 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 01:05:03.649073 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-159256 -v=7 --alsologtostderr: (53.804821996s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.79s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-159256 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

TestMultiControlPlane/serial/CopyFile (19.52s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 status --output json -v=7 --alsologtostderr
E0420 01:05:14.032891 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 01:05:14.038127 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 01:05:14.049039 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 01:05:14.069445 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 01:05:14.109712 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 01:05:14.189990 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 01:05:14.350862 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp testdata/cp-test.txt ha-159256:/home/docker/cp-test.txt
E0420 01:05:14.671001 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3336646767/001/cp-test_ha-159256.txt
E0420 01:05:15.311184 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256:/home/docker/cp-test.txt ha-159256-m02:/home/docker/cp-test_ha-159256_ha-159256-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256 "sudo cat /home/docker/cp-test.txt"
E0420 01:05:16.591617 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m02 "sudo cat /home/docker/cp-test_ha-159256_ha-159256-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256:/home/docker/cp-test.txt ha-159256-m03:/home/docker/cp-test_ha-159256_ha-159256-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m03 "sudo cat /home/docker/cp-test_ha-159256_ha-159256-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256:/home/docker/cp-test.txt ha-159256-m04:/home/docker/cp-test_ha-159256_ha-159256-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m04 "sudo cat /home/docker/cp-test_ha-159256_ha-159256-m04.txt"
E0420 01:05:19.156572 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp testdata/cp-test.txt ha-159256-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3336646767/001/cp-test_ha-159256-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m02:/home/docker/cp-test.txt ha-159256:/home/docker/cp-test_ha-159256-m02_ha-159256.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256 "sudo cat /home/docker/cp-test_ha-159256-m02_ha-159256.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m02:/home/docker/cp-test.txt ha-159256-m03:/home/docker/cp-test_ha-159256-m02_ha-159256-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m03 "sudo cat /home/docker/cp-test_ha-159256-m02_ha-159256-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m02:/home/docker/cp-test.txt ha-159256-m04:/home/docker/cp-test_ha-159256-m02_ha-159256-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m04 "sudo cat /home/docker/cp-test_ha-159256-m02_ha-159256-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp testdata/cp-test.txt ha-159256-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m03 "sudo cat /home/docker/cp-test.txt"
E0420 01:05:24.277486 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3336646767/001/cp-test_ha-159256-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m03:/home/docker/cp-test.txt ha-159256:/home/docker/cp-test_ha-159256-m03_ha-159256.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256 "sudo cat /home/docker/cp-test_ha-159256-m03_ha-159256.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m03:/home/docker/cp-test.txt ha-159256-m02:/home/docker/cp-test_ha-159256-m03_ha-159256-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m02 "sudo cat /home/docker/cp-test_ha-159256-m03_ha-159256-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m03:/home/docker/cp-test.txt ha-159256-m04:/home/docker/cp-test_ha-159256-m03_ha-159256-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m04 "sudo cat /home/docker/cp-test_ha-159256-m03_ha-159256-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp testdata/cp-test.txt ha-159256-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3336646767/001/cp-test_ha-159256-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m04:/home/docker/cp-test.txt ha-159256:/home/docker/cp-test_ha-159256-m04_ha-159256.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256 "sudo cat /home/docker/cp-test_ha-159256-m04_ha-159256.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m04:/home/docker/cp-test.txt ha-159256-m02:/home/docker/cp-test_ha-159256-m04_ha-159256-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m02 "sudo cat /home/docker/cp-test_ha-159256-m04_ha-159256-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m04:/home/docker/cp-test.txt ha-159256-m03:/home/docker/cp-test_ha-159256-m04_ha-159256-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m03 "sudo cat /home/docker/cp-test_ha-159256-m04_ha-159256-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.52s)
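
For reference, the copy checks above can be replayed by hand. A minimal sketch, assuming the ha-159256 profile from this run is still up (the destination filename in the last step is illustrative):

	# push a local file onto a secondary node, then read it back over ssh
	out/minikube-linux-arm64 -p ha-159256 cp testdata/cp-test.txt ha-159256-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-159256 ssh -n ha-159256-m02 "sudo cat /home/docker/cp-test.txt"
	# node-to-node copies go through the same cp subcommand
	out/minikube-linux-arm64 -p ha-159256 cp ha-159256-m02:/home/docker/cp-test.txt ha-159256-m03:/home/docker/cp-test-copy.txt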

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 node stop m02 -v=7 --alsologtostderr
E0420 01:05:34.518328 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-159256 node stop m02 -v=7 --alsologtostderr: (12.012665464s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr: exit status 7 (773.98901ms)

                                                
                                                
-- stdout --
	ha-159256
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-159256-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-159256-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-159256-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:05:45.247426 1687581 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:05:45.247625 1687581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:05:45.247636 1687581 out.go:304] Setting ErrFile to fd 2...
	I0420 01:05:45.247642 1687581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:05:45.247891 1687581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 01:05:45.248093 1687581 out.go:298] Setting JSON to false
	I0420 01:05:45.248126 1687581 mustload.go:65] Loading cluster: ha-159256
	I0420 01:05:45.248205 1687581 notify.go:220] Checking for updates...
	I0420 01:05:45.249209 1687581 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:05:45.249236 1687581 status.go:255] checking status of ha-159256 ...
	I0420 01:05:45.249845 1687581 cli_runner.go:164] Run: docker container inspect ha-159256 --format={{.State.Status}}
	I0420 01:05:45.279545 1687581 status.go:330] ha-159256 host status = "Running" (err=<nil>)
	I0420 01:05:45.279579 1687581 host.go:66] Checking if "ha-159256" exists ...
	I0420 01:05:45.279878 1687581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256
	I0420 01:05:45.305096 1687581 host.go:66] Checking if "ha-159256" exists ...
	I0420 01:05:45.305388 1687581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 01:05:45.305443 1687581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256
	I0420 01:05:45.325169 1687581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34690 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256/id_rsa Username:docker}
	I0420 01:05:45.423770 1687581 ssh_runner.go:195] Run: systemctl --version
	I0420 01:05:45.428478 1687581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:05:45.441646 1687581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 01:05:45.505887 1687581 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-04-20 01:05:45.496622568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 01:05:45.506497 1687581 kubeconfig.go:125] found "ha-159256" server: "https://192.168.49.254:8443"
	I0420 01:05:45.506535 1687581 api_server.go:166] Checking apiserver status ...
	I0420 01:05:45.506587 1687581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:05:45.518084 1687581 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup
	I0420 01:05:45.532150 1687581 api_server.go:182] apiserver freezer: "13:freezer:/docker/9432785ebd3e48b7cae35953ca8636442d5943b3ad3a724262492a22f74c77fd/crio/crio-65d354cf9289190bb47c9bc10a0635be24c1608fda04d7f0ee9449d196b3230e"
	I0420 01:05:45.532218 1687581 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9432785ebd3e48b7cae35953ca8636442d5943b3ad3a724262492a22f74c77fd/crio/crio-65d354cf9289190bb47c9bc10a0635be24c1608fda04d7f0ee9449d196b3230e/freezer.state
	I0420 01:05:45.541917 1687581 api_server.go:204] freezer state: "THAWED"
	I0420 01:05:45.541949 1687581 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0420 01:05:45.549836 1687581 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0420 01:05:45.549865 1687581 status.go:422] ha-159256 apiserver status = Running (err=<nil>)
	I0420 01:05:45.549877 1687581 status.go:257] ha-159256 status: &{Name:ha-159256 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 01:05:45.549900 1687581 status.go:255] checking status of ha-159256-m02 ...
	I0420 01:05:45.550213 1687581 cli_runner.go:164] Run: docker container inspect ha-159256-m02 --format={{.State.Status}}
	I0420 01:05:45.566973 1687581 status.go:330] ha-159256-m02 host status = "Stopped" (err=<nil>)
	I0420 01:05:45.566997 1687581 status.go:343] host is not running, skipping remaining checks
	I0420 01:05:45.567005 1687581 status.go:257] ha-159256-m02 status: &{Name:ha-159256-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 01:05:45.567025 1687581 status.go:255] checking status of ha-159256-m03 ...
	I0420 01:05:45.567341 1687581 cli_runner.go:164] Run: docker container inspect ha-159256-m03 --format={{.State.Status}}
	I0420 01:05:45.582421 1687581 status.go:330] ha-159256-m03 host status = "Running" (err=<nil>)
	I0420 01:05:45.582455 1687581 host.go:66] Checking if "ha-159256-m03" exists ...
	I0420 01:05:45.582765 1687581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256-m03
	I0420 01:05:45.606647 1687581 host.go:66] Checking if "ha-159256-m03" exists ...
	I0420 01:05:45.606949 1687581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 01:05:45.607007 1687581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m03
	I0420 01:05:45.623249 1687581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34700 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m03/id_rsa Username:docker}
	I0420 01:05:45.722753 1687581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:05:45.734752 1687581 kubeconfig.go:125] found "ha-159256" server: "https://192.168.49.254:8443"
	I0420 01:05:45.734783 1687581 api_server.go:166] Checking apiserver status ...
	I0420 01:05:45.734844 1687581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:05:45.746094 1687581 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1317/cgroup
	I0420 01:05:45.755429 1687581 api_server.go:182] apiserver freezer: "13:freezer:/docker/7ebcf16c2ba529ec9163760b380ae32b848a77d7567b9897ccf98d607c8b5acb/crio/crio-9add0a277d3301b9501d7d49f7cd36090985993182c50fb7499c100db10172e3"
	I0420 01:05:45.755506 1687581 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7ebcf16c2ba529ec9163760b380ae32b848a77d7567b9897ccf98d607c8b5acb/crio/crio-9add0a277d3301b9501d7d49f7cd36090985993182c50fb7499c100db10172e3/freezer.state
	I0420 01:05:45.764555 1687581 api_server.go:204] freezer state: "THAWED"
	I0420 01:05:45.764586 1687581 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0420 01:05:45.772522 1687581 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0420 01:05:45.772550 1687581 status.go:422] ha-159256-m03 apiserver status = Running (err=<nil>)
	I0420 01:05:45.772560 1687581 status.go:257] ha-159256-m03 status: &{Name:ha-159256-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 01:05:45.772589 1687581 status.go:255] checking status of ha-159256-m04 ...
	I0420 01:05:45.773994 1687581 cli_runner.go:164] Run: docker container inspect ha-159256-m04 --format={{.State.Status}}
	I0420 01:05:45.789583 1687581 status.go:330] ha-159256-m04 host status = "Running" (err=<nil>)
	I0420 01:05:45.789612 1687581 host.go:66] Checking if "ha-159256-m04" exists ...
	I0420 01:05:45.789910 1687581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-159256-m04
	I0420 01:05:45.809484 1687581 host.go:66] Checking if "ha-159256-m04" exists ...
	I0420 01:05:45.810069 1687581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 01:05:45.810133 1687581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-159256-m04
	I0420 01:05:45.826837 1687581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34705 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/ha-159256-m04/id_rsa Username:docker}
	I0420 01:05:45.927118 1687581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:05:45.940365 1687581 status.go:257] ha-159256-m04 status: &{Name:ha-159256-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
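
The status check above leans on minikube's exit-code convention: status exits non-zero (7 in this run) whenever any node in the profile is down. A minimal sketch of the same check, assuming the ha-159256 profile:

	out/minikube-linux-arm64 -p ha-159256 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
	echo $?   # 7 expected here, since m02 reports Stopped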

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (43.13s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 node start m02 -v=7 --alsologtostderr
E0420 01:05:54.999002 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-159256 node start m02 -v=7 --alsologtostderr: (42.055483414s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.22s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-159256 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-159256 -v=7 --alsologtostderr
E0420 01:06:35.959686 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-159256 -v=7 --alsologtostderr: (36.878478062s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-159256 --wait=true -v=7 --alsologtostderr
E0420 01:07:57.880375 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 01:09:35.965316 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-159256 --wait=true -v=7 --alsologtostderr: (2m39.174556672s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-159256
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.22s)
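
This pass asserts that a full stop/start cycle leaves the node list intact. A sketch of the sequence the test drives, with the same flags as the run above:

	out/minikube-linux-arm64 node list -p ha-159256 -v=7 --alsologtostderr
	out/minikube-linux-arm64 stop -p ha-159256 -v=7 --alsologtostderr
	out/minikube-linux-arm64 start -p ha-159256 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-arm64 node list -p ha-159256   # should match the pre-stop node list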

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.93s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-159256 node delete m03 -v=7 --alsologtostderr: (11.939678488s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.93s)
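
A condensed sketch of the delete check, assuming the same ha-159256 profile is up:

	out/minikube-linux-arm64 -p ha-159256 node delete m03 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
	kubectl get nodes   # m03 should no longer be listed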

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.97s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 stop -v=7 --alsologtostderr
E0420 01:10:14.033118 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-159256 stop -v=7 --alsologtostderr: (35.859220591s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr: exit status 7 (113.319948ms)

                                                
                                                
-- stdout --
	ha-159256
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-159256-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-159256-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:10:36.303103 1701563 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:10:36.303222 1701563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:10:36.303233 1701563 out.go:304] Setting ErrFile to fd 2...
	I0420 01:10:36.303239 1701563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:10:36.303488 1701563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 01:10:36.303673 1701563 out.go:298] Setting JSON to false
	I0420 01:10:36.303701 1701563 mustload.go:65] Loading cluster: ha-159256
	I0420 01:10:36.303809 1701563 notify.go:220] Checking for updates...
	I0420 01:10:36.304116 1701563 config.go:182] Loaded profile config "ha-159256": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:10:36.304127 1701563 status.go:255] checking status of ha-159256 ...
	I0420 01:10:36.304644 1701563 cli_runner.go:164] Run: docker container inspect ha-159256 --format={{.State.Status}}
	I0420 01:10:36.321020 1701563 status.go:330] ha-159256 host status = "Stopped" (err=<nil>)
	I0420 01:10:36.321044 1701563 status.go:343] host is not running, skipping remaining checks
	I0420 01:10:36.321052 1701563 status.go:257] ha-159256 status: &{Name:ha-159256 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 01:10:36.321085 1701563 status.go:255] checking status of ha-159256-m02 ...
	I0420 01:10:36.321448 1701563 cli_runner.go:164] Run: docker container inspect ha-159256-m02 --format={{.State.Status}}
	I0420 01:10:36.337783 1701563 status.go:330] ha-159256-m02 host status = "Stopped" (err=<nil>)
	I0420 01:10:36.337807 1701563 status.go:343] host is not running, skipping remaining checks
	I0420 01:10:36.337815 1701563 status.go:257] ha-159256-m02 status: &{Name:ha-159256-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 01:10:36.337838 1701563 status.go:255] checking status of ha-159256-m04 ...
	I0420 01:10:36.338158 1701563 cli_runner.go:164] Run: docker container inspect ha-159256-m04 --format={{.State.Status}}
	I0420 01:10:36.357419 1701563 status.go:330] ha-159256-m04 host status = "Stopped" (err=<nil>)
	I0420 01:10:36.357442 1701563 status.go:343] host is not running, skipping remaining checks
	I0420 01:10:36.357450 1701563 status.go:257] ha-159256-m04 status: &{Name:ha-159256-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.97s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (63.66s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-159256 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-159256 --control-plane -v=7 --alsologtostderr: (1m2.655502261s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr: (1.008677589s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (63.66s)
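
Adding a control-plane node by hand follows the same two steps, assuming the ha-159256 profile:

	out/minikube-linux-arm64 node add -p ha-159256 --control-plane -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-159256 status -v=7 --alsologtostderr   # new node should report type: Control Plane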

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                    
TestJSONOutput/start/Command (76.37s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-503087 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0420 01:14:35.965805 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-503087 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m16.370450539s)
--- PASS: TestJSONOutput/start/Command (76.37s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-503087 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-503087 --output=json --user=testUser
E0420 01:15:14.032756 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.97s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-503087 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-503087 --output=json --user=testUser: (5.972156473s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-912990 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-912990 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.820102ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8ef54e5a-8865-4068-bbc8-1d6223a4b95f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-912990] minikube v1.33.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7775a11e-8f0f-453f-bf72-646d8a80ca79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18703"}}
	{"specversion":"1.0","id":"f90ab277-f5fc-45d9-ad76-f880def3042b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a861b890-d9ce-46c1-9056-d14452f5383d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig"}}
	{"specversion":"1.0","id":"c3876c23-0208-4b1a-9e06-9ea8dab6e501","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube"}}
	{"specversion":"1.0","id":"b6e0125a-4a3d-482c-a730-cb4cc305de86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"50ac97b7-a61c-4779-ae87-0a37ec96d4fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"94707889-782b-462f-ba7d-ed00be5489fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-912990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-912990
--- PASS: TestErrorJSONOutput (0.23s)
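
The error path can be reproduced directly: an unsupported --driver value should emit a structured error event on stdout and exit with code 56, as in the run above:

	out/minikube-linux-arm64 start -p json-output-error-912990 --memory=2200 --output=json --wait=true --driver=fail
	echo $?   # 56 expected (DRV_UNSUPPORTED_OS)
	out/minikube-linux-arm64 delete -p json-output-error-912990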

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.17s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-199707 --network=
E0420 01:15:59.009673 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-199707 --network=: (42.095215395s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-199707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-199707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-199707: (2.042918912s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.17s)
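
A minimal sketch of the custom-network check, reusing the profile name from this run; after the start, the test confirms the expected network name appears in the docker network list:

	out/minikube-linux-arm64 start -p docker-network-199707 --network=
	docker network ls --format {{.Name}}   # the cluster's network should be listed
	out/minikube-linux-arm64 delete -p docker-network-199707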

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.26s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-704774 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-704774 --network=bridge: (35.203347408s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-704774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-704774
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-704774: (2.017666944s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.26s)

                                                
                                    
TestKicExistingNetwork (34.53s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-154920 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-154920 --network=existing-network: (32.441763964s)
helpers_test.go:175: Cleaning up "existing-network-154920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-154920
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-154920: (1.947433596s)
--- PASS: TestKicExistingNetwork (34.53s)

                                                
                                    
TestKicCustomSubnet (32.86s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-356620 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-356620 --subnet=192.168.60.0/24: (30.799636203s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-356620 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-356620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-356620
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-356620: (2.037581806s)
--- PASS: TestKicCustomSubnet (32.86s)
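
The subnet check pairs a --subnet flag with a docker network inspect of the resulting network; a minimal sketch using the values from this run:

	out/minikube-linux-arm64 start -p custom-subnet-356620 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-356620 --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24
	out/minikube-linux-arm64 delete -p custom-subnet-356620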

                                                
                                    
TestKicStaticIP (35.11s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-750445 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-750445 --static-ip=192.168.200.200: (32.976933425s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-750445 ip
helpers_test.go:175: Cleaning up "static-ip-750445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-750445
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-750445: (1.984735406s)
--- PASS: TestKicStaticIP (35.11s)
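
Likewise for the static-IP check: start with --static-ip and confirm minikube reports the same address back:

	out/minikube-linux-arm64 start -p static-ip-750445 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-750445 ip   # expect 192.168.200.200
	out/minikube-linux-arm64 delete -p static-ip-750445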

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (67.91s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-449639 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-449639 --driver=docker  --container-runtime=crio: (30.461177038s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-452298 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-452298 --driver=docker  --container-runtime=crio: (32.34171863s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-449639
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-452298
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-452298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-452298
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-452298: (1.919520543s)
helpers_test.go:175: Cleaning up "first-449639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-449639
E0420 01:19:35.965668 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-449639: (1.952374838s)
--- PASS: TestMinikubeProfile (67.91s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-208691 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-208691 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.092757086s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.09s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-208691 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.1s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-221113 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-221113 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.095151664s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.10s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-221113 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-208691 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-208691 --alsologtostderr -v=5: (1.602872398s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-221113 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-221113
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-221113: (1.210361456s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.76s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-221113
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-221113: (6.760812289s)
--- PASS: TestMountStart/serial/RestartStopped (7.76s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-221113 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
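
The MountStart sequence above boils down to: start a profile with a host mount, verify the mount over ssh, stop and restart, then verify it again. A condensed sketch with the flags from this run:

	out/minikube-linux-arm64 start -p mount-start-2-221113 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p mount-start-2-221113 ssh -- ls /minikube-host   # mount visible
	out/minikube-linux-arm64 stop -p mount-start-2-221113
	out/minikube-linux-arm64 start -p mount-start-2-221113
	out/minikube-linux-arm64 -p mount-start-2-221113 ssh -- ls /minikube-host   # mount visible again after restart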

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (123.19s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-557268 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0420 01:20:14.032952 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 01:21:37.082185 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-557268 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m2.685450113s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (123.19s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-557268 -- rollout status deployment/busybox: (2.994202389s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-f4tr7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-jwsfz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-f4tr7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-jwsfz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-f4tr7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-jwsfz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.95s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.05s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-f4tr7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-f4tr7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-jwsfz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-jwsfz -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)
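
The host-ping check resolves host.minikube.internal from inside each busybox pod and pings the returned gateway address; the pod name and address below are specific to this run:

	out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-f4tr7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-arm64 kubectl -p multinode-557268 -- exec busybox-fc5497c4f-f4tr7 -- sh -c "ping -c 1 192.168.67.1"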

                                                
                                    
TestMultiNode/serial/AddNode (47.56s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-557268 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-557268 -v 3 --alsologtostderr: (46.888487658s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.56s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-557268 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.37s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp testdata/cp-test.txt multinode-557268:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp multinode-557268:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3554774798/001/cp-test_multinode-557268.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp multinode-557268:/home/docker/cp-test.txt multinode-557268-m02:/home/docker/cp-test_multinode-557268_multinode-557268-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m02 "sudo cat /home/docker/cp-test_multinode-557268_multinode-557268-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp multinode-557268:/home/docker/cp-test.txt multinode-557268-m03:/home/docker/cp-test_multinode-557268_multinode-557268-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m03 "sudo cat /home/docker/cp-test_multinode-557268_multinode-557268-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp testdata/cp-test.txt multinode-557268-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp multinode-557268-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3554774798/001/cp-test_multinode-557268-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp multinode-557268-m02:/home/docker/cp-test.txt multinode-557268:/home/docker/cp-test_multinode-557268-m02_multinode-557268.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268 "sudo cat /home/docker/cp-test_multinode-557268-m02_multinode-557268.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp multinode-557268-m02:/home/docker/cp-test.txt multinode-557268-m03:/home/docker/cp-test_multinode-557268-m02_multinode-557268-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m03 "sudo cat /home/docker/cp-test_multinode-557268-m02_multinode-557268-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp testdata/cp-test.txt multinode-557268-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp multinode-557268-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3554774798/001/cp-test_multinode-557268-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp multinode-557268-m03:/home/docker/cp-test.txt multinode-557268:/home/docker/cp-test_multinode-557268-m03_multinode-557268.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268 "sudo cat /home/docker/cp-test_multinode-557268-m03_multinode-557268.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 cp multinode-557268-m03:/home/docker/cp-test.txt multinode-557268-m02:/home/docker/cp-test_multinode-557268-m03_multinode-557268-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 ssh -n multinode-557268-m02 "sudo cat /home/docker/cp-test_multinode-557268-m03_multinode-557268-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.37s)
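
The copy matrix above exercises every direction of minikube cp. The three forms it covers, sketched with node names from this run (/tmp/cp-test.txt stands in for the per-test temp dir):

	# host -> node
	minikube -p multinode-557268 cp testdata/cp-test.txt multinode-557268:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-557268 cp multinode-557268:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node
	minikube -p multinode-557268 cp multinode-557268:/home/docker/cp-test.txt multinode-557268-m02:/home/docker/cp-test.txt
	# verify on the target node; -n picks which node to ssh into
	minikube -p multinode-557268 ssh -n multinode-557268-m02 "sudo cat /home/docker/cp-test.txt"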

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-557268 node stop m03: (1.225995532s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-557268 status: exit status 7 (538.439481ms)

                                                
                                                
-- stdout --
	multinode-557268
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-557268-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-557268-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-557268 status --alsologtostderr: exit status 7 (521.300853ms)

                                                
                                                
-- stdout --
	multinode-557268
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-557268-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-557268-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:23:16.652716 1753052 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:23:16.652869 1753052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:23:16.652880 1753052 out.go:304] Setting ErrFile to fd 2...
	I0420 01:23:16.652886 1753052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:23:16.653144 1753052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 01:23:16.653333 1753052 out.go:298] Setting JSON to false
	I0420 01:23:16.653363 1753052 mustload.go:65] Loading cluster: multinode-557268
	I0420 01:23:16.653419 1753052 notify.go:220] Checking for updates...
	I0420 01:23:16.653818 1753052 config.go:182] Loaded profile config "multinode-557268": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:23:16.653842 1753052 status.go:255] checking status of multinode-557268 ...
	I0420 01:23:16.654329 1753052 cli_runner.go:164] Run: docker container inspect multinode-557268 --format={{.State.Status}}
	I0420 01:23:16.672727 1753052 status.go:330] multinode-557268 host status = "Running" (err=<nil>)
	I0420 01:23:16.672765 1753052 host.go:66] Checking if "multinode-557268" exists ...
	I0420 01:23:16.673080 1753052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-557268
	I0420 01:23:16.691460 1753052 host.go:66] Checking if "multinode-557268" exists ...
	I0420 01:23:16.691771 1753052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 01:23:16.691832 1753052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-557268
	I0420 01:23:16.717776 1753052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34810 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/multinode-557268/id_rsa Username:docker}
	I0420 01:23:16.815029 1753052 ssh_runner.go:195] Run: systemctl --version
	I0420 01:23:16.819656 1753052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:23:16.831509 1753052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 01:23:16.886497 1753052 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-20 01:23:16.875657016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 01:23:16.887107 1753052 kubeconfig.go:125] found "multinode-557268" server: "https://192.168.67.2:8443"
	I0420 01:23:16.887153 1753052 api_server.go:166] Checking apiserver status ...
	I0420 01:23:16.887213 1753052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0420 01:23:16.898706 1753052 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	I0420 01:23:16.908100 1753052 api_server.go:182] apiserver freezer: "13:freezer:/docker/15e9dabdf5e3af81b13cb8a9452901c510d125e71e9403ea4f95ff6a34668f6c/crio/crio-17ea38f8ae51a5059a9f88c066b7789fdbf9ed5134742a19dd173ff04d519df7"
	I0420 01:23:16.908172 1753052 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/15e9dabdf5e3af81b13cb8a9452901c510d125e71e9403ea4f95ff6a34668f6c/crio/crio-17ea38f8ae51a5059a9f88c066b7789fdbf9ed5134742a19dd173ff04d519df7/freezer.state
	I0420 01:23:16.917024 1753052 api_server.go:204] freezer state: "THAWED"
	I0420 01:23:16.917054 1753052 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0420 01:23:16.924770 1753052 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0420 01:23:16.924799 1753052 status.go:422] multinode-557268 apiserver status = Running (err=<nil>)
	I0420 01:23:16.924810 1753052 status.go:257] multinode-557268 status: &{Name:multinode-557268 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 01:23:16.924827 1753052 status.go:255] checking status of multinode-557268-m02 ...
	I0420 01:23:16.925138 1753052 cli_runner.go:164] Run: docker container inspect multinode-557268-m02 --format={{.State.Status}}
	I0420 01:23:16.943214 1753052 status.go:330] multinode-557268-m02 host status = "Running" (err=<nil>)
	I0420 01:23:16.943239 1753052 host.go:66] Checking if "multinode-557268-m02" exists ...
	I0420 01:23:16.943554 1753052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-557268-m02
	I0420 01:23:16.958769 1753052 host.go:66] Checking if "multinode-557268-m02" exists ...
	I0420 01:23:16.959090 1753052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0420 01:23:16.959131 1753052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-557268-m02
	I0420 01:23:16.975633 1753052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34815 SSHKeyPath:/home/jenkins/minikube-integration/18703-1638187/.minikube/machines/multinode-557268-m02/id_rsa Username:docker}
	I0420 01:23:17.075125 1753052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0420 01:23:17.087810 1753052 status.go:257] multinode-557268-m02 status: &{Name:multinode-557268-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0420 01:23:17.087845 1753052 status.go:255] checking status of multinode-557268-m03 ...
	I0420 01:23:17.088157 1753052 cli_runner.go:164] Run: docker container inspect multinode-557268-m03 --format={{.State.Status}}
	I0420 01:23:17.103595 1753052 status.go:330] multinode-557268-m03 host status = "Stopped" (err=<nil>)
	I0420 01:23:17.103622 1753052 status.go:343] host is not running, skipping remaining checks
	I0420 01:23:17.103630 1753052 status.go:257] multinode-557268-m03 status: &{Name:multinode-557268-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
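
Note that status intentionally exits 7 once any node is down, so scripts probing a partially stopped cluster have to tolerate that code. A sketch:

	minikube -p multinode-557268 node stop m03
	minikube -p multinode-557268 status \
	  || echo "status exited $? (expected 7 while m03 is stopped)"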

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-557268 node start m03 -v=7 --alsologtostderr: (9.10769936s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.89s)
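
Restarting only the stopped node brings the cluster back to fully Running without touching the other members. A sketch:

	minikube -p multinode-557268 node start m03 -v=7 --alsologtostderr
	minikube -p multinode-557268 status -v=7 --alsologtostderr
	kubectl get nodes   # all nodes should report Ready again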

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-557268
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-557268
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-557268: (24.812421264s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-557268 --wait=true -v=8 --alsologtostderr
E0420 01:24:35.965596 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-557268 --wait=true -v=8 --alsologtostderr: (56.337962575s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-557268
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.30s)
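
The property under test is that a full stop/start cycle preserves the node list. A sketch of the cycle:

	minikube node list -p multinode-557268            # record the node list
	minikube stop -p multinode-557268
	minikube start -p multinode-557268 --wait=true -v=8 --alsologtostderr
	minikube node list -p multinode-557268            # should match the first listing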

                                                
                                    
TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-557268 node delete m03: (4.585553692s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)
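
Deleting a node removes it from both the profile and the Kubernetes API; the go-template query above then confirms every remaining node reports Ready. A sketch:

	minikube -p multinode-557268 node delete m03
	kubectl get nodes   # m03 gone; remaining nodes Ready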

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 stop
E0420 01:25:14.032798 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-557268 stop: (23.673997814s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-557268 status: exit status 7 (95.988762ms)

                                                
                                                
-- stdout --
	multinode-557268
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-557268-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-557268 status --alsologtostderr: exit status 7 (88.927191ms)

                                                
                                                
-- stdout --
	multinode-557268
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-557268-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0420 01:25:17.391983 1760145 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:25:17.392201 1760145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:25:17.392229 1760145 out.go:304] Setting ErrFile to fd 2...
	I0420 01:25:17.392249 1760145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:25:17.392529 1760145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 01:25:17.392764 1760145 out.go:298] Setting JSON to false
	I0420 01:25:17.392818 1760145 mustload.go:65] Loading cluster: multinode-557268
	I0420 01:25:17.392929 1760145 notify.go:220] Checking for updates...
	I0420 01:25:17.393298 1760145 config.go:182] Loaded profile config "multinode-557268": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:25:17.393344 1760145 status.go:255] checking status of multinode-557268 ...
	I0420 01:25:17.393962 1760145 cli_runner.go:164] Run: docker container inspect multinode-557268 --format={{.State.Status}}
	I0420 01:25:17.410035 1760145 status.go:330] multinode-557268 host status = "Stopped" (err=<nil>)
	I0420 01:25:17.410057 1760145 status.go:343] host is not running, skipping remaining checks
	I0420 01:25:17.410064 1760145 status.go:257] multinode-557268 status: &{Name:multinode-557268 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0420 01:25:17.410107 1760145 status.go:255] checking status of multinode-557268-m02 ...
	I0420 01:25:17.410424 1760145 cli_runner.go:164] Run: docker container inspect multinode-557268-m02 --format={{.State.Status}}
	I0420 01:25:17.427441 1760145 status.go:330] multinode-557268-m02 host status = "Stopped" (err=<nil>)
	I0420 01:25:17.427464 1760145 status.go:343] host is not running, skipping remaining checks
	I0420 01:25:17.427471 1760145 status.go:257] multinode-557268-m02 status: &{Name:multinode-557268-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.86s)
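
Without a node argument, minikube stop takes down every node in the profile, after which status exits 7 for the same reason as in StopNode above. A sketch:

	minikube -p multinode-557268 stop
	minikube -p multinode-557268 status   # exit 7: all hosts report Stopped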

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-557268 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-557268 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (51.099282274s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-557268 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.79s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-557268
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-557268-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-557268-m02 --driver=docker  --container-runtime=crio: exit status 14 (86.003626ms)

                                                
                                                
-- stdout --
	* [multinode-557268-m02] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-557268-m02' is duplicated with machine name 'multinode-557268-m02' in profile 'multinode-557268'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-557268-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-557268-m03 --driver=docker  --container-runtime=crio: (34.027862735s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-557268
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-557268: exit status 80 (365.86843ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-557268 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-557268-m03 already exists in multinode-557268-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-557268-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-557268-m03: (2.004079135s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.55s)
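
Profile names share a namespace with machine (node) names: multinode-557268-m02 is rejected outright (exit 14), while multinode-557268-m03 starts as a standalone profile and then collides when node add tries to create an m03 machine (exit 80). A sketch of the rejected case:

	minikube start -p multinode-557268-m02 --driver=docker --container-runtime=crio
	# exit 14 (MK_USAGE): profile name duplicates an existing machine name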

                                                
                                    
TestPreload (117.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-510758 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-510758 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m25.337822186s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-510758 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-510758 image pull gcr.io/k8s-minikube/busybox: (1.791185787s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-510758
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-510758: (5.781779613s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-510758 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-510758 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (22.050108341s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-510758 image list
helpers_test.go:175: Cleaning up "test-preload-510758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-510758
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-510758: (2.368396898s)
--- PASS: TestPreload (117.66s)
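
The scenario: start without a preload tarball on an older Kubernetes, pull an extra image, then restart and check the image survived. A sketch with the version and image from this run:

	minikube start -p test-preload-510758 --memory=2200 --preload=false --wait=true \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	minikube -p test-preload-510758 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-510758
	minikube start -p test-preload-510758 --memory=2200 --wait=true \
	  --driver=docker --container-runtime=crio
	minikube -p test-preload-510758 image list   # busybox should still be listed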

                                                
                                    
TestScheduledStopUnix (108.17s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-812037 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-812037 --memory=2048 --driver=docker  --container-runtime=crio: (31.547624556s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-812037 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-812037 -n scheduled-stop-812037
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-812037 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-812037 --cancel-scheduled
E0420 01:29:35.965003 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-812037 -n scheduled-stop-812037
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-812037
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-812037 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0420 01:30:14.034958 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-812037
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-812037: exit status 7 (76.355354ms)

                                                
                                                
-- stdout --
	scheduled-stop-812037
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-812037 -n scheduled-stop-812037
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-812037 -n scheduled-stop-812037: exit status 7 (79.526651ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-812037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-812037
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-812037: (4.943895061s)
--- PASS: TestScheduledStopUnix (108.17s)
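
Scheduled stop is armed, re-armed, and cancelled as a background countdown: --schedule arms it, --cancel-scheduled disarms it, and once a window elapses the host stops and status exits 7. A simplified sketch (the 20s sleep is just an assumed margin over the 15s schedule):

	minikube stop -p scheduled-stop-812037 --schedule 5m
	minikube stop -p scheduled-stop-812037 --cancel-scheduled   # nothing stops
	minikube stop -p scheduled-stop-812037 --schedule 15s       # re-arm
	sleep 20                                                    # wait out the window
	minikube status -p scheduled-stop-812037 --format={{.Host}} # Stopped, exit 7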

                                                
                                    
TestInsufficientStorage (10.47s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-032542 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-032542 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.897383775s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"789bf7fe-ef9d-4936-84ed-c5a30a34579b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-032542] minikube v1.33.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6109c78d-7b27-4ad0-b1b2-e68a52938ecf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18703"}}
	{"specversion":"1.0","id":"2c4578e0-0cac-494b-af2f-8e4dc772a412","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8073a066-9adb-4543-bb12-f4ae7cc4cf28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig"}}
	{"specversion":"1.0","id":"e6e92e53-2bc6-4252-ac80-b35980c881fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube"}}
	{"specversion":"1.0","id":"e8e40cdc-5117-47b9-9718-b03b26f32e70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f1e7c064-ba4c-482e-b307-f41de78d0195","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68f3fe3d-a212-4604-b735-e0b21e97f1e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"85092c3e-ba3b-4517-bf19-48ecad058898","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1deea150-b1aa-4909-a438-4c937d0664fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"094749b3-3223-4121-a81d-59edf00a1ee1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"84d9a454-3db4-4e7d-805c-6cf48324eeb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-032542\" primary control-plane node in \"insufficient-storage-032542\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"61426a3f-c95f-48b9-90ca-8017620a59d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5492c414-9f1d-44c6-ad68-e685b12f73eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"683c854c-adb6-439c-9142-bf43099269f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-032542 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-032542 --output=json --layout=cluster: exit status 7 (280.041976ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-032542","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-032542","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:30:43.859847 1776624 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-032542" does not appear in /home/jenkins/minikube-integration/18703-1638187/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-032542 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-032542 --output=json --layout=cluster: exit status 7 (293.971001ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-032542","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-032542","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0420 01:30:44.155682 1776678 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-032542" does not appear in /home/jenkins/minikube-integration/18703-1638187/kubeconfig
	E0420 01:30:44.166221 1776678 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/insufficient-storage-032542/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-032542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-032542
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-032542: (1.994473069s)
--- PASS: TestInsufficientStorage (10.47s)
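
The low-disk condition is simulated through test-only variables whose values appear in the JSON events above; start then aborts with exit 26 (RSRC_DOCKER_STORAGE) and status reports 507/InsufficientStorage. A sketch, assuming the harness passes them as ordinary environment variables:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-032542 --memory=2048 --output=json \
	  --wait=true --driver=docker --container-runtime=crio   # exit 26
	minikube status -p insufficient-storage-032542 --output=json --layout=cluster
	# per the error text, '--force' would skip the free-space check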

                                                
                                    
TestRunningBinaryUpgrade (73.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.451689632 start -p running-upgrade-645713 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0420 01:34:35.965246 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.451689632 start -p running-upgrade-645713 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.382306927s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-645713 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0420 01:35:14.033111 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-645713 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.423400401s)
helpers_test.go:175: Cleaning up "running-upgrade-645713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-645713
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-645713: (3.034985446s)
--- PASS: TestRunningBinaryUpgrade (73.92s)
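
The upgrade path: create the cluster with an old release binary, then run start on the same, still-running profile with the freshly built one. A sketch (the old-binary path is the temp file used by this run):

	/tmp/minikube-v1.26.0.451689632 start -p running-upgrade-645713 --memory=2200 \
	  --vm-driver=docker --container-runtime=crio
	minikube start -p running-upgrade-645713 --memory=2200 \
	  --driver=docker --container-runtime=crio   # new binary adopts the live cluster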

                                                
                                    
TestKubernetesUpgrade (382.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-286500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-286500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.230433086s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-286500
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-286500: (1.38837996s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-286500 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-286500 status --format={{.Host}}: exit status 7 (106.6056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-286500 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-286500 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.060141302s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-286500 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-286500 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-286500 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (92.745697ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-286500] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-286500
	    minikube start -p kubernetes-upgrade-286500 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2865002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-286500 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-286500 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-286500 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.729601562s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-286500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-286500
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-286500: (2.387445036s)
--- PASS: TestKubernetesUpgrade (382.12s)
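
Upgrades are stop-then-start with a newer --kubernetes-version; downgrades are refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) plus the recovery suggestions shown above. A sketch:

	minikube start -p kubernetes-upgrade-286500 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-286500
	minikube start -p kubernetes-upgrade-286500 --memory=2200 \
	  --kubernetes-version=v1.30.0 --driver=docker --container-runtime=crio  # upgrade ok
	minikube start -p kubernetes-upgrade-286500 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio  # exit 106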

                                                
                                    
TestMissingContainerUpgrade (151.57s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1713951883 start -p missing-upgrade-869736 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1713951883 start -p missing-upgrade-869736 --memory=2200 --driver=docker  --container-runtime=crio: (1m17.073634675s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-869736
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-869736: (10.429408508s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-869736
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-869736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0420 01:32:39.010728 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-869736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (59.62512003s)
helpers_test.go:175: Cleaning up "missing-upgrade-869736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-869736
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-869736: (3.171535443s)
--- PASS: TestMissingContainerUpgrade (151.57s)
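
Here the cluster's container is deleted out from under minikube (docker stop + rm) before the new binary runs, so start must recreate the machine rather than adopt it. A sketch:

	/tmp/minikube-v1.26.0.1713951883 start -p missing-upgrade-869736 --memory=2200 \
	  --driver=docker --container-runtime=crio
	docker stop missing-upgrade-869736 && docker rm missing-upgrade-869736
	minikube start -p missing-upgrade-869736 --memory=2200 \
	  --driver=docker --container-runtime=crio   # recreates the missing container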

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-209558 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-209558 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (85.530539ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-209558] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
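
--no-kubernetes and --kubernetes-version are mutually exclusive (exit 14), and per the hint above a version pinned in global config must be unset first. A sketch of the rejected invocation:

	minikube start -p NoKubernetes-209558 --no-kubernetes --kubernetes-version=1.20 \
	  --driver=docker --container-runtime=crio   # exit 14 (MK_USAGE)
	minikube config unset kubernetes-version     # clears a globally pinned version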

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-209558 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-209558 --driver=docker  --container-runtime=crio: (37.020984326s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-209558 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-209558 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-209558 --no-kubernetes --driver=docker  --container-runtime=crio: (6.213522838s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-209558 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-209558 status -o json: exit status 2 (579.437548ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-209558","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-209558
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-209558: (2.248166366s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.04s)

                                                
                                    
TestNoKubernetes/serial/Start (7.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-209558 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-209558 --no-kubernetes --driver=docker  --container-runtime=crio: (7.234837683s)
--- PASS: TestNoKubernetes/serial/Start (7.23s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-209558 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-209558 "sudo systemctl is-active --quiet service kubelet": exit status 1 (378.242054ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-209558
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-209558: (1.26531678s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (8.44s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-209558 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-209558 --driver=docker  --container-runtime=crio: (8.437544779s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.44s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-209558 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-209558 "sudo systemctl is-active --quiet service kubelet": exit status 1 (480.601393ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

TestStoppedBinaryUpgrade/Setup (1.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.55s)

TestStoppedBinaryUpgrade/Upgrade (65.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3458440546 start -p stopped-upgrade-566162 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3458440546 start -p stopped-upgrade-566162 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.27983437s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3458440546 -p stopped-upgrade-566162 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3458440546 -p stopped-upgrade-566162 stop: (2.561973511s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-566162 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-566162 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.140011247s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (65.98s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-566162
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-566162: (1.198181354s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

TestPause/serial/Start (74.34s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-516826 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-516826 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m14.341828162s)
--- PASS: TestPause/serial/Start (74.34s)

TestPause/serial/SecondStartNoReconfiguration (36.35s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-516826 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-516826 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.337333761s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.35s)

TestPause/serial/Pause (1.07s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-516826 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-516826 --alsologtostderr -v=5: (1.071390655s)
--- PASS: TestPause/serial/Pause (1.07s)

TestPause/serial/VerifyStatus (0.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-516826 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-516826 --output=json --layout=cluster: exit status 2 (436.909632ms)

-- stdout --
	{"Name":"pause-516826","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-516826","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
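
Note: the --layout=cluster payload above encodes state as HTTP-style status codes, per cluster and per component: 200 (OK), 405 (Stopped), 418 (Paused). A minimal Go sketch for walking the node/component part of that payload; the structs cover only the fields visible in the log, not the full schema:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []node
}

func main() {
	// Trimmed from the log above: a paused profile with a stopped kubelet.
	raw := `{"Name":"pause-516826","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-516826","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	for _, n := range cs.Nodes {
		for _, c := range n.Components {
			fmt.Printf("%s/%s: %s (%d)\n", n.Name, c.Name, c.StatusName, c.StatusCode)
		}
	}
}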

TestPause/serial/Unpause (0.95s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-516826 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

TestPause/serial/PauseAgain (1.22s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-516826 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-516826 --alsologtostderr -v=5: (1.223045692s)
--- PASS: TestPause/serial/PauseAgain (1.22s)

TestPause/serial/DeletePaused (3.09s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-516826 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-516826 --alsologtostderr -v=5: (3.091188478s)
--- PASS: TestPause/serial/DeletePaused (3.09s)

TestPause/serial/VerifyDeletedResources (0.51s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-516826
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-516826: exit status 1 (16.576712ms)

-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-516826: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)
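
Note: the cleanup verification relies on `docker volume inspect` failing for a missing volume: it prints an empty JSON array to stdout and exits 1 with "no such volume" on stderr, exactly as the log shows. A minimal Go sketch of that check (the volume name is copied from the log, so treat it as a placeholder):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder volume name, copied from the log above.
	cmd := exec.Command("docker", "volume", "inspect", "pause-516826")
	var stdout bytes.Buffer
	cmd.Stdout = &stdout
	err := cmd.Run()
	if err != nil && bytes.Equal(bytes.TrimSpace(stdout.Bytes()), []byte("[]")) {
		// Exit 1 plus an empty array means the volume is gone: deletion succeeded.
		fmt.Println("volume removed")
		return
	}
	fmt.Println("volume still present, or docker unavailable:", err)
}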

TestNetworkPlugins/group/false (5.06s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-386626 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-386626 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (174.957758ms)

-- stdout --
	* [false-386626] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18703
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I0420 01:38:20.817348 1815508 out.go:291] Setting OutFile to fd 1 ...
	I0420 01:38:20.817471 1815508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:38:20.817516 1815508 out.go:304] Setting ErrFile to fd 2...
	I0420 01:38:20.817522 1815508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0420 01:38:20.817782 1815508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18703-1638187/.minikube/bin
	I0420 01:38:20.818188 1815508 out.go:298] Setting JSON to false
	I0420 01:38:20.819080 1815508 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30048,"bootTime":1713547053,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0420 01:38:20.819151 1815508 start.go:139] virtualization:  
	I0420 01:38:20.821883 1815508 out.go:177] * [false-386626] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0420 01:38:20.824276 1815508 out.go:177]   - MINIKUBE_LOCATION=18703
	I0420 01:38:20.824367 1815508 notify.go:220] Checking for updates...
	I0420 01:38:20.828013 1815508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0420 01:38:20.829837 1815508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18703-1638187/kubeconfig
	I0420 01:38:20.831700 1815508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18703-1638187/.minikube
	I0420 01:38:20.833345 1815508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0420 01:38:20.834767 1815508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0420 01:38:20.837066 1815508 config.go:182] Loaded profile config "force-systemd-flag-796769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0420 01:38:20.837166 1815508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0420 01:38:20.855873 1815508 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0420 01:38:20.855998 1815508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0420 01:38:20.921788 1815508 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-20 01:38:20.912882143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0420 01:38:20.921893 1815508 docker.go:295] overlay module found
	I0420 01:38:20.924101 1815508 out.go:177] * Using the docker driver based on user configuration
	I0420 01:38:20.926047 1815508 start.go:297] selected driver: docker
	I0420 01:38:20.926063 1815508 start.go:901] validating driver "docker" against <nil>
	I0420 01:38:20.926076 1815508 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0420 01:38:20.928498 1815508 out.go:177] 
	W0420 01:38:20.930595 1815508 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0420 01:38:20.932855 1815508 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-386626 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-386626

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-386626

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-386626

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-386626

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-386626

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-386626

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-386626

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-386626

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-386626

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-386626

>>> host: /etc/nsswitch.conf:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /etc/hosts:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /etc/resolv.conf:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-386626

>>> host: crictl pods:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: crictl containers:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> k8s: describe netcat deployment:
error: context "false-386626" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-386626" does not exist

>>> k8s: netcat logs:
error: context "false-386626" does not exist

>>> k8s: describe coredns deployment:
error: context "false-386626" does not exist

>>> k8s: describe coredns pods:
error: context "false-386626" does not exist

>>> k8s: coredns logs:
error: context "false-386626" does not exist

>>> k8s: describe api server pod(s):
error: context "false-386626" does not exist

>>> k8s: api server logs:
error: context "false-386626" does not exist

>>> host: /etc/cni:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: ip a s:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: ip r s:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: iptables-save:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: iptables table nat:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> k8s: describe kube-proxy daemon set:
error: context "false-386626" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-386626" does not exist

>>> k8s: kube-proxy logs:
error: context "false-386626" does not exist

>>> host: kubelet daemon status:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: kubelet daemon config:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> k8s: kubelet logs:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-386626

>>> host: docker daemon status:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: docker daemon config:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /etc/docker/daemon.json:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: docker system info:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: cri-docker daemon status:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: cri-docker daemon config:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: cri-dockerd version:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: containerd daemon status:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: containerd daemon config:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /etc/containerd/config.toml:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: containerd config dump:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: crio daemon status:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: crio daemon config:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: /etc/crio:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

>>> host: crio config:
* Profile "false-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-386626"

----------------------- debugLogs end: false-386626 [took: 4.663926328s] --------------------------------
helpers_test.go:175: Cleaning up "false-386626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-386626
--- PASS: TestNetworkPlugins/group/false (5.06s)
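
Note: this group passes because rejection is the expected behavior: CRI-O ships no built-in networking, so minikube refuses --cni=false with a usage error (MK_USAGE, exit status 14) before creating any resources, which is why every debugLogs probe above reports a missing profile. A minimal Go sketch asserting that behavior; binary path and profile name are placeholders copied from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Placeholders copied from the log above.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-386626",
		"--cni=false", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		// MK_USAGE: the "crio" container runtime requires CNI (see stderr above).
		fmt.Println("rejected up front with exit status 14, as expected")
		return
	}
	fmt.Println("unexpected result:", err)
}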

TestStartStop/group/old-k8s-version/serial/FirstStart (149.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-646137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0420 01:40:14.032812 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-646137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m29.284499141s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.29s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-646137 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8edf59c4-c929-4e45-83dd-ca47b2090bbe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8edf59c4-c929-4e45-83dd-ca47b2090bbe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004317892s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-646137 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-646137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-646137 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-646137 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-646137 --alsologtostderr -v=3: (11.994821122s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-646137 -n old-k8s-version-646137
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-646137 -n old-k8s-version-646137: exit status 7 (119.96802ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-646137 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (140.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-646137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-646137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m20.532434212s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-646137 -n old-k8s-version-646137
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.97s)

TestStartStop/group/embed-certs/serial/FirstStart (83.39s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-239314 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-239314 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (1m23.387183331s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.39s)

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-239314 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [278fd347-e79a-4480-ba54-e80af72be5b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [278fd347-e79a-4480-ba54-e80af72be5b7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003951476s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-239314 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-239314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-239314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059208957s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-239314 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (11.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-239314 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-239314 --alsologtostderr -v=3: (11.944577219s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-239314 -n embed-certs-239314
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-239314 -n embed-certs-239314: exit status 7 (78.632884ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-239314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (266.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-239314 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0420 01:44:35.965622 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-239314 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (4m25.878091338s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-239314 -n embed-certs-239314
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-2xgvm" [12f7abd2-0850-46da-890f-9f0e066ce280] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.01183521s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-2xgvm" [12f7abd2-0850-46da-890f-9f0e066ce280] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004238171s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-646137 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-646137 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-646137 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-646137 -n old-k8s-version-646137
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-646137 -n old-k8s-version-646137: exit status 2 (341.406627ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-646137 -n old-k8s-version-646137
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-646137 -n old-k8s-version-646137: exit status 2 (335.342933ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-646137 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-646137 -n old-k8s-version-646137
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-646137 -n old-k8s-version-646137
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.03s)

TestStartStop/group/no-preload/serial/FirstStart (66s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-824036 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-824036 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (1m6.004445706s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.00s)

TestStartStop/group/no-preload/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-824036 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [90c4864b-9e0f-4bd4-819c-c7e0b077fff8] Pending
helpers_test.go:344: "busybox" [90c4864b-9e0f-4bd4-819c-c7e0b077fff8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [90c4864b-9e0f-4bd4-819c-c7e0b077fff8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003328598s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-824036 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-824036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-824036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.083627214s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-824036 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-824036 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-824036 --alsologtostderr -v=3: (12.01298884s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-824036 -n no-preload-824036
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-824036 -n no-preload-824036: exit status 7 (89.032232ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-824036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
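
The exit status 7 above is expected: minikube status encodes the host, cluster,
and Kubernetes states in the exit code's bits, so 7 simply means everything is
stopped, not that the command failed; hence the test's "may be ok". The point of
the subtest is that addon configuration is accepted while the cluster is down:

  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-824036   # "Stopped", exit 7
  out/minikube-linux-arm64 addons enable dashboard -p no-preload-824036 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4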

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (267.28s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-824036 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0420 01:47:12.038430 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:12.043734 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:12.053982 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:12.074211 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:12.114557 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:12.194876 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:12.355159 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:12.675810 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:13.316122 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:14.596991 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:17.157641 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:22.277944 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:32.519064 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:47:52.999958 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:48:33.961678 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-824036 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (4m26.877038745s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-824036 -n no-preload-824036
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-42mtb" [3d2ce63a-7800-43f3-ad29-dd642242618b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00315451s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)
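
UserAppExistsAfterStop only asserts that a previously deployed workload (the
dashboard here) comes back on its own after the stop/start cycle. A manual
equivalent, with the timeout mirroring the test's 9m0s wait:

  kubectl --context embed-certs-239314 -n kubernetes-dashboard wait \
    --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m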

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-42mtb" [3d2ce63a-7800-43f3-ad29-dd642242618b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003461194s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-239314 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-239314 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-239314 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239314 -n embed-certs-239314
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239314 -n embed-certs-239314: exit status 2 (341.086557ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239314 -n embed-certs-239314
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239314 -n embed-certs-239314: exit status 2 (333.395489ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-239314 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239314 -n embed-certs-239314
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239314 -n embed-certs-239314
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)
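
The two "Non-zero exit ... (may be ok)" pairs above are the expected shape of a
successful pause: with the control plane frozen, {{.APIServer}} reports Paused
and {{.Kubelet}} reports Stopped, and status exits 2 to flag the non-running
state. Condensed:

  out/minikube-linux-arm64 pause -p embed-certs-239314
  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239314   # Paused, exit 2
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239314     # Stopped, exit 2
  out/minikube-linux-arm64 unpause -p embed-certs-239314
  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239314   # back to exit 0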

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-077068 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0420 01:49:19.011447 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 01:49:35.967207 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 01:49:55.882615 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
E0420 01:50:14.033026 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-077068 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (1m17.190401073s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.19s)
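
This group is the stock configuration except that the API server is moved off
the default 8443 with --apiserver-port=8444. A quick way to confirm the port
took effect (a sketch; cluster-info echoes the control-plane URL recorded in
the kubeconfig):

  kubectl --context default-k8s-diff-port-077068 cluster-info   # URL should end in :8444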

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-077068 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a0b3d9cb-bf86-47d8-af8f-49719873568d] Pending
helpers_test.go:344: "busybox" [a0b3d9cb-bf86-47d8-af8f-49719873568d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a0b3d9cb-bf86-47d8-af8f-49719873568d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003650704s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-077068 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-077068 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-077068 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-077068 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-077068 --alsologtostderr -v=3: (11.947148784s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-077068 -n default-k8s-diff-port-077068
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-077068 -n default-k8s-diff-port-077068: exit status 7 (98.453975ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-077068 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-077068 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-077068 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (4m28.121420472s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-077068 -n default-k8s-diff-port-077068
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-vwkd6" [097a4370-ca4e-4d4c-bb9d-26e751277acf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004293205s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-vwkd6" [097a4370-ca4e-4d4c-bb9d-26e751277acf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00593659s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-824036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-824036 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.72s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-824036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-824036 --alsologtostderr -v=1: (1.205256307s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-824036 -n no-preload-824036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-824036 -n no-preload-824036: exit status 2 (407.733996ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-824036 -n no-preload-824036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-824036 -n no-preload-824036: exit status 2 (582.215475ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-824036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-824036 --alsologtostderr -v=1: (1.275898172s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-824036 -n no-preload-824036
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-824036 -n no-preload-824036
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.01s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-718050 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0420 01:52:12.038463 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/old-k8s-version-646137/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-718050 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (47.010285876s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-718050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-718050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.144313505s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)
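
The warning is emitted because this profile starts with --network-plugin=cni
but deliberately deploys no CNI, so pod networking is never wired up; that is
also why DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop are 0.00s
no-ops in this group. A sketch of observing the effect (without a CNI the node
typically stays NotReady and non-host-network pods stay Pending):

  kubectl --context newest-cni-718050 get nodes
  kubectl --context newest-cni-718050 -n kube-system get pods -o wide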

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-718050 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-718050 --alsologtostderr -v=3: (1.295380598s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718050 -n newest-cni-718050
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718050 -n newest-cni-718050: exit status 7 (74.373851ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-718050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.61s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-718050 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-718050 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (16.257499034s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718050 -n newest-cni-718050
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.61s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-718050 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.84s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-718050 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718050 -n newest-cni-718050
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718050 -n newest-cni-718050: exit status 2 (336.357527ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-718050 -n newest-cni-718050
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-718050 -n newest-cni-718050: exit status 2 (330.43335ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-718050 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718050 -n newest-cni-718050
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-718050 -n newest-cni-718050
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.84s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (78.13s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m18.127880951s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.13s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-386626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-386626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-v78hb" [000bf871-29bb-4a16-935f-855667f49ef0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-v78hb" [000bf871-29bb-4a16-935f-855667f49ef0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00443979s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-386626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
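
The DNS/Localhost/HairPin trio is the same probe run three ways from inside the
netcat pod: name resolution through cluster DNS, a loopback dial, and a hairpin
dial where the pod reaches itself back through its own Service VIP. The exact
commands, reusable against any plugin profile in this run:

  kubectl --context auto-386626 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"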

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (80.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0420 01:54:35.964953 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
E0420 01:54:57.083280 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 01:55:14.032715 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m20.146531305s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.15s)
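
The TestNetworkPlugins groups differ only in how the CNI is chosen at start;
auto above passes no CNI flag at all, while the remaining variants in this run
are selected explicitly:

  out/minikube-linux-arm64 start -p kindnet-386626 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 start -p calico-386626 --memory=3072 --cni=calico --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 start -p custom-flannel-386626 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 start -p enable-default-cni-386626 --memory=3072 --enable-default-cni=true --driver=docker --container-runtime=crio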

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-9h7nm" [d4e64a80-5685-43d5-b9cb-4a4af85641c6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003371509s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-9h7nm" [d4e64a80-5685-43d5-b9cb-4a4af85641c6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004177155s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-077068 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-077068 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-077068 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-077068 -n default-k8s-diff-port-077068
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-077068 -n default-k8s-diff-port-077068: exit status 2 (315.124351ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-077068 -n default-k8s-diff-port-077068
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-077068 -n default-k8s-diff-port-077068: exit status 2 (333.115618ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-077068 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-077068 -n default-k8s-diff-port-077068
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-077068 -n default-k8s-diff-port-077068
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.09s)
E0420 02:00:14.032664 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
E0420 02:00:22.505720 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 02:00:27.159909 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:27.165161 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:27.175420 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:27.195772 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:27.236105 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:27.316359 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:27.477079 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:27.797639 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:28.438001 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:29.718533 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:32.279254 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:37.399866 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:47.640355 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory
E0420 02:00:53.125451 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:53.130790 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:53.141133 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:53.161723 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:53.202099 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:53.282486 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:53.442869 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:53.763439 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:54.404513 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:55.685512 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:00:58.246201 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:01:03.366761 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
E0420 02:01:08.121123 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/default-k8s-diff-port-077068/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.35s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m17.3484693s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mqlnk" [1d61579e-668b-49ef-b8b7-9f2b1e3e31b9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005067712s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
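
ControllerPod confirms the plugin's own daemon is healthy before any
connectivity probes run. Equivalent by hand, with the test's 10m0s budget:

  kubectl --context kindnet-386626 -n kube-system wait \
    --for=condition=ready pod -l app=kindnet --timeout=10m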

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-386626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-386626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6ldvx" [d0a97edb-63ea-4805-8e37-48a597d8611c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6ldvx" [d0a97edb-63ea-4805-8e37-48a597d8611c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004254834s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-386626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (67.59s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0420 01:56:41.028203 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/no-preload-824036/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m7.588631961s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.59s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fcp4j" [bb54fd72-46ee-430e-b631-9f67f9d6f63f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007002754s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-386626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-386626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-r8vhg" [9bde0948-98da-4fc5-a9df-2f18b7de4dd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0420 01:57:01.508704 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/no-preload-824036/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-r8vhg" [9bde0948-98da-4fc5-a9df-2f18b7de4dd0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003853041s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-386626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.01s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0420 01:57:42.469141 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/no-preload-824036/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m29.011311177s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.01s)
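
Which CNI configuration such a start actually drops on the node can be inspected afterwards; a minimal sketch, assuming the standard /etc/cni/net.d config directory and conflist-style files:

    out/minikube-linux-arm64 ssh -p enable-default-cni-386626 "ls /etc/cni/net.d && sudo cat /etc/cni/net.d/*.conflist"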

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-386626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-386626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9hwml" [3268d472-f8fb-4e04-b19f-93df5cea12c8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9hwml" [3268d472-f8fb-4e04-b19f-93df5cea12c8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003444328s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-386626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (67.59s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0420 01:59:00.582243 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:00.587515 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:00.597848 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:00.618125 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:00.658421 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:00.738789 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:00.899232 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:01.219782 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:01.860874 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:03.141893 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:04.389788 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/no-preload-824036/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.588868426s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.59s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-386626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-386626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sswxj" [5f8cbeac-7e77-4dee-835f-7552503f289b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0420 01:59:05.702558 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
E0420 01:59:10.823449 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-sswxj" [5f8cbeac-7e77-4dee-835f-7552503f289b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003729471s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-386626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wskww" [6c6ad439-1bd0-42a8-93b2-ab916a20d619] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006305846s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
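
The ControllerPod check only waits for the flannel DaemonSet pod to report Ready; an equivalent manual probe, assuming the kube-flannel namespace and app=flannel label shown above:

    kubectl --context flannel-386626 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=600s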

TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-386626 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-386626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vxwbx" [ffd7ba3a-7423-4d6e-8b42-866f67e88455] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0420 01:59:35.965011 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/addons-747503/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-vxwbx" [ffd7ba3a-7423-4d6e-8b42-866f67e88455] Running
E0420 01:59:41.545137 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/auto-386626/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.012516226s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/bridge/Start (93.76s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-386626 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m33.763288852s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.76s)

TestNetworkPlugins/group/flannel/DNS (0.38s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-386626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.38s)

TestNetworkPlugins/group/flannel/Localhost (0.35s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.35s)

TestNetworkPlugins/group/flannel/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-386626 "pgrep -a kubelet"
E0420 02:01:13.607606 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/kindnet-386626/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-386626 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rrv6n" [e61e4460-51bd-4c3f-a2da-d3690b6d2762] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rrv6n" [e61e4460-51bd-4c3f-a2da-d3690b6d2762] Running
E0420 02:01:20.540898 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/no-preload-824036/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003487407s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestNetworkPlugins/group/bridge/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-386626 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)
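
The DNS probe only requires in-cluster name resolution to work from the test pod; a manual equivalent that also surfaces which resolver the pod is using, assuming the same netcat deployment:

    kubectl --context bridge-386626 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context bridge-386626 exec deployment/netcat -- cat /etc/resolv.conf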

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-386626 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (29/327)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-407942 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-407942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-407942
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-612083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-612083
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (5.21s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E0420 01:38:17.083073 1643623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/functional-756660/client.crt: no such file or directory
panic.go:626: 
----------------------- debugLogs start: kubenet-386626 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-386626

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-386626

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-386626

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-386626

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-386626

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-386626

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-386626

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-386626

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-386626

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-386626

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /etc/hosts:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /etc/resolv.conf:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-386626

>>> host: crictl pods:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: crictl containers:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> k8s: describe netcat deployment:
error: context "kubenet-386626" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-386626" does not exist

>>> k8s: netcat logs:
error: context "kubenet-386626" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-386626" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-386626" does not exist

>>> k8s: coredns logs:
error: context "kubenet-386626" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-386626" does not exist

>>> k8s: api server logs:
error: context "kubenet-386626" does not exist

>>> host: /etc/cni:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: ip a s:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: ip r s:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: iptables-save:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: iptables table nat:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-386626" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-386626" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-386626" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: kubelet daemon config:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> k8s: kubelet logs:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18703-1638187/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 20 Apr 2024 01:38:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-796769
contexts:
- context:
    cluster: force-systemd-flag-796769
    extensions:
    - extension:
        last-update: Sat, 20 Apr 2024 01:38:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: force-systemd-flag-796769
  name: force-systemd-flag-796769
current-context: force-systemd-flag-796769
kind: Config
preferences: {}
users:
- name: force-systemd-flag-796769
  user:
    client-certificate: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/force-systemd-flag-796769/client.crt
    client-key: /home/jenkins/minikube-integration/18703-1638187/.minikube/profiles/force-systemd-flag-796769/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-386626

>>> host: docker daemon status:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: docker daemon config:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: docker system info:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: cri-docker daemon status:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: cri-docker daemon config:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: cri-dockerd version:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: containerd daemon status:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: containerd daemon config:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: containerd config dump:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: crio daemon status:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: crio daemon config:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: /etc/crio:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"

>>> host: crio config:
* Profile "kubenet-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-386626"
----------------------- debugLogs end: kubenet-386626 [took: 5.057752485s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-386626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-386626
--- SKIP: TestNetworkPlugins/group/kubenet (5.21s)
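
Every probe in the debugLogs dump above fails the same way because the kubenet case is skipped and its profile cleaned up before a cluster is ever created, so no kubenet-386626 context exists; the kubectl config shown mid-dump belongs to the unrelated force-systemd-flag-796769 profile that happened to be current at the time. A quick way to confirm which profiles and contexts actually exist on the test host:

    out/minikube-linux-arm64 profile list
    kubectl config get-contexts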

TestNetworkPlugins/group/cilium (5.97s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-386626 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-386626

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-386626

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-386626

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-386626

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-386626

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-386626

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-386626

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-386626

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-386626

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-386626

>>> host: /etc/nsswitch.conf:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /etc/hosts:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /etc/resolv.conf:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-386626

>>> host: crictl pods:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: crictl containers:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> k8s: describe netcat deployment:
error: context "cilium-386626" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-386626" does not exist

>>> k8s: netcat logs:
error: context "cilium-386626" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-386626" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-386626" does not exist

>>> k8s: coredns logs:
error: context "cilium-386626" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-386626" does not exist

>>> k8s: api server logs:
error: context "cilium-386626" does not exist

>>> host: /etc/cni:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: ip a s:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: ip r s:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: iptables-save:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: iptables table nat:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-386626

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-386626

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-386626" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-386626" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-386626

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-386626

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-386626" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-386626" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-386626" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-386626" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-386626" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: kubelet daemon config:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> k8s: kubelet logs:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-386626

>>> host: docker daemon status:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: docker daemon config:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: docker system info:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: cri-docker daemon status:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: cri-docker daemon config:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: cri-dockerd version:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: containerd daemon status:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: containerd daemon config:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: containerd config dump:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: crio daemon status:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: crio daemon config:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: /etc/crio:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"

>>> host: crio config:
* Profile "cilium-386626" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-386626"
----------------------- debugLogs end: cilium-386626 [took: 5.794933667s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-386626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-386626
--- SKIP: TestNetworkPlugins/group/cilium (5.97s)
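Every probe in the debugLogs block above fails the same way because the cilium-386626 profile was never created: the test was skipped at net_test.go:102 before any cluster started, so kubectl has no matching context and minikube has no matching profile. A minimal Go sketch of a pre-check a log collector could run before issuing those probes, assuming the JSON shape of `minikube profile list -o json` (the profileExists helper and the bare `minikube` binary name are illustrative, not part of this suite):

    // check_profile.go: a hedged sketch, not part of the minikube test suite.
    // Checks whether a minikube profile exists before collecting debug logs,
    // which would avoid the dozens of failed probes recorded above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList models the `minikube profile list -o json` output shape
    // (a "valid" array whose entries carry a "Name" field); these field
    // names are an assumption based on that command's JSON output.
    type profileList struct {
        Valid []struct {
            Name string `json:"Name"`
        } `json:"valid"`
    }

    // profileExists reports whether the named profile appears in the list
    // of valid profiles.
    func profileExists(name string) (bool, error) {
        out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
        if err != nil {
            return false, err
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            return false, err
        }
        for _, p := range pl.Valid {
            if p.Name == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := profileExists("cilium-386626")
        fmt.Printf("profile exists: %v (err: %v)\n", ok, err)
    }

Run with `go run check_profile.go`; for a profile that was never started, as here, it should print `profile exists: false` and the probes could be skipped outright.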